notes-java

Iterator<String> it = ls.iterator();
while (it.hasNext()) {
    if (it.next().equals("some")) {
        ls.remove("some");   // modifying the list directly while iterating over it
    }
}
This throws java.util.ConcurrentModificationException, which happens when you try to delete/remove an element from a collection while you are iterating over it.
If you use the Iterator's own remove() instead, no problem is raised.
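
A minimal runnable sketch of the safe idiom, using the iterator's own remove() (class name and list contents are just for illustration):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class SafeRemoveDemo {
    public static void main(String[] args) {
        List<String> ls = new ArrayList<>(Arrays.asList("some", "other", "some"));

        Iterator<String> it = ls.iterator();
        while (it.hasNext()) {
            if (it.next().equals("some")) {
                it.remove();   // removes through the iterator, so no ConcurrentModificationException
            }
        }
        System.out.println(ls);   // [other]
    }
}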

1) Both LinkedHashMap and HashMap are not synchronized and are subject to race conditions if shared between multiple threads without proper synchronization. Use Collections.synchronizedMap() to make them synchronized.

2) Iterators returned by HashMap and LinkedHashMap are fail-fast in nature.

3) The performance of HashMap and LinkedHashMap is also similar.

Difference between LinkedHashMap and HashMap in Java
Now let’s see some differences between LinkedHashMap and HashMap in Java:

1) The first and foremost difference between LinkedHashMap and HashMap is order: HashMap doesn't maintain any order, while LinkedHashMap maintains the insertion order of elements in Java.

2) LinkedHashMap also requires more memory than HashMap because of this ordering feature. As said before, LinkedHashMap uses a doubly linked list to keep the order of elements.

3) LinkedHashMap actually extends HashMap and implements the Map interface.

Few things to note, while using LinkedHashMap in Java

1) The default ordering provided by LinkedHashMap is the order in which keys are inserted, known as insertion order, but LinkedHashMap can also be created with another ordering called access order, which is defined by how entries are accessed.

2) Re-inserting a mapping doesn't alter the insertion order of LinkedHashMap. For example, if you already have a mapping for a key and update its value by calling put(key, newValue), the insertion order of the LinkedHashMap remains the same.

3) Access order is affected by calling get(key), put(key, value) or putAll(). When a particular entry is accessed, it moves towards the end of the doubly linked list maintained by LinkedHashMap.

4) LinkedHashMap can be used to create an LRU cache in Java, since in an LRU (Least Recently Used) cache the oldest non-accessed entry is removed, and that entry is the head of the doubly linked list maintained by LinkedHashMap.

5) The Iterator of LinkedHashMap returns elements in order, i.e. either insertion order or access order.

6) LinkedHashMap also provides a method called removeEldestEntry(), which is protected and whose default implementation returns false. If overridden, an implementation can return true to remove the oldest entry when a new entry is added (see the LRU cache sketch below).
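
As a rough sketch of points 4) and 6): an access-ordered LinkedHashMap with removeEldestEntry() overridden behaves like a small LRU cache (the capacity of 3 and the keys below are just illustrative):

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCacheDemo {
    static <K, V> Map<K, V> lruCache(int capacity) {
        // accessOrder = true: get()/put() move the touched entry to the tail
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;   // evict the head, i.e. the least recently used entry
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = lruCache(3);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a");        // touch "a" so it becomes the most recently used entry
        cache.put("d", 4);     // evicts "b", the least recently used entry
        System.out.println(cache.keySet());   // [c, a, d]
    }
}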

Given the insertion order guarantee of LinkedHashMap, it's a good compromise between HashMap and TreeMap in Java, because with TreeMap you pay an increased cost for basic operations due to sorting, and performance drops from constant time to O(log n). That's all about the difference between LinkedHashMap and HashMap in Java.

Read more: http://www.java67.com/2012/08/difference-between-hashmap-and-LinkedHashMap-Java.html#ixzz5Aeogyd23

Collections.sort(entry, comparator);
This is only applicable to List, not to Set or Map.

One of the key advantages of the Strategy pattern is its extensibility, i.e. introducing a new Strategy is as easy as writing a new class that implements the Strategy interface. With Enum, instead of creating a separate class, you create a separate Enum instance, which means fewer classes and the full benefit of the Strategy pattern.

Read more: http://javarevisited.blogspot.com/2014/11/strategy-design-pattern-in-java-using-Enum-Example.html#ixzz5Af9cjMpm

Strategy Pattern Example using Enum
Here is a full code example of implementing the Strategy design pattern using Enum in Java. If you are coding in the Eclipse IDE you don't need to do much: just select and copy this code, select the Java project you are working on in Eclipse and paste it. Eclipse will take care of creating the right package and source file with the proper name, e.g. the name of the public class. This is the quickest way to try and execute a Java code snippet from the internet in your IDE. Also remember, the Strategy pattern is a good example of the Open Closed Principle from the SOLID object oriented design principles.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Java program to demonstrate that Enum can be used to implement Strategy
 * Pattern in Java.
 *
 * @author Javin
 */
public class Match {

    private static final Logger logger = LoggerFactory.getLogger(Match.class);

    public static void main(String args[]) {

        Player ctx = new Player(Strategy.T20);
        ctx.play();

        ctx.setStrategy(Strategy.ONE_DAY);
        ctx.play();

        ctx.setStrategy(Strategy.TEST);
        ctx.play();

    }

}

/*
 * Player class, which uses different Strategy implementation.
 */
class Player {

    private Strategy battingStrategy;

    public Player(Strategy battingStrategy) {
        this.battingStrategy = battingStrategy;
    }

    public void setStrategy(Strategy newStrategy) {
        this.battingStrategy = newStrategy;
    }

    public void play() {
        battingStrategy.play();
    }
}

/*
 * An Enum to implement Strategy design pattern in Java. Different instances of
 * Enum represent different batting strategy, based upon type of game e.g. T20,
 * One day international or Test match.
 */
enum Strategy {

    /* Make sure to score quickly in T20 games */
    T20 {

        @Override
        public void play() {
            System.out.printf("In %s, If it's in the V, make sure it goes to tree %n", name());
        }
    },

    /* Make a balance between attack and defence in One day */
    ONE_DAY {

        @Override
        public void play() {
            System.out.printf("In %s, Push it for Single %n", name());
        }
    },

    /* Test match is all about occupying the crease and grinding the opposition */
    TEST {

        @Override
        public void play() {
            System.out.printf("In %s, Grind them hard %n", name());
        }
    };

    public void play() {
        System.out.printf("In Cricket, Play as per Merit of Ball %n");
    }
}

Output:
In T20, If it’s in the V, make sure it goes to tree
In ONE_DAY, Push it for Single
In TEST, Grind them hard

Read more: http://javarevisited.blogspot.com/2014/11/strategy-design-pattern-in-java-using-Enum-Example.html#ixzz5Af9qBdJW

LinkedList => implements both List and Deque.

Deque interface in Java with Example
The java.util.Deque interface is a subtype of the java.util.Queue interface. Deque stands for double-ended queue: it supports addition or removal of elements from either end of the data structure, so it can be used as a queue (first-in-first-out/FIFO) or as a stack (last-in-first-out/LIFO).

Operations on Queue:

add() - Adds an element at the tail of the queue; more specifically, at the end of the linked list if one is used, or according to priority in the case of a priority queue implementation.
peek() - Views the head of the queue without removing it. Returns null if the queue is empty.
element() - Similar to peek(). Throws NoSuchElementException if the queue is empty.
remove() - Removes and returns the head of the queue. Throws NoSuchElementException if the queue is empty.
poll() - Removes and returns the head of the queue. Returns null if the queue is empty.
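
A small sketch of these operations using ArrayDeque, first as a FIFO queue and then as a LIFO stack (values are arbitrary):

import java.util.ArrayDeque;
import java.util.Deque;

public class DequeDemo {
    public static void main(String[] args) {
        // FIFO queue: add at the tail, remove from the head
        Deque<String> queue = new ArrayDeque<>();
        queue.add("first");
        queue.add("second");
        System.out.println(queue.poll());   // first
        System.out.println(queue.peek());   // second

        // LIFO stack: push and pop both operate on the head
        Deque<String> stack = new ArrayDeque<>();
        stack.push("bottom");
        stack.push("top");
        System.out.println(stack.pop());    // top
        System.out.println(stack.peek());   // bottom
    }
}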

SortedMap is an interface in the collection framework. This interface extends Map and provides a total ordering of its keys. An example class that implements this interface is TreeMap.

The main characteristic of a SortedMap is that it orders the keys by their natural ordering, or by a specified comparator. So consider using a TreeMap when you want a map that satisfies the following criteria:

Null keys are not permitted (null values are allowed by TreeMap).
The keys are sorted either by natural ordering or by a specified comparator.
Methods of SortedMap:

subMap(K fromKey, K toKey): Returns a view of the portion of this Map whose keys range from fromKey, inclusive, to toKey, exclusive.
headMap(K toKey): Returns a view of the portion of this Map whose keys are strictly less than toKey.
tailMap(K fromKey): Returns a view of the portion of this Map whose keys are greater than or equal to fromKey.
firstKey(): Returns the first (lowest) key currently in this Map.
lastKey(): Returns the last (highest) key currently in this Map.
comparator(): Returns the Comparator used to order the keys in this Map, or null if this Map uses the natural ordering of its keys.
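
A short TreeMap sketch exercising these SortedMap methods (keys and values are arbitrary):

import java.util.SortedMap;
import java.util.TreeMap;

public class SortedMapDemo {
    public static void main(String[] args) {
        SortedMap<String, Integer> map = new TreeMap<>();
        map.put("C", 3);
        map.put("A", 1);
        map.put("T", 20);
        map.put("B", 2);

        System.out.println(map);                  // {A=1, B=2, C=3, T=20} - sorted by key
        System.out.println(map.firstKey());       // A
        System.out.println(map.lastKey());        // T
        System.out.println(map.headMap("C"));     // {A=1, B=2} - keys strictly less than "C"
        System.out.println(map.tailMap("C"));     // {C=3, T=20} - keys greater than or equal to "C"
        System.out.println(map.subMap("A", "C")); // {A=1, B=2} - from "A" inclusive to "C" exclusive
        System.out.println(map.comparator());     // null - natural ordering is used
    }
}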

NavigableMap Interface in Java with Example
NavigableMap is an extension of SortedMap which provides convenient navigation methods like lowerKey, floorKey, ceilingKey and higherKey. Along with these navigation methods it also provides ways to create sub-maps from an existing map, e.g. headMap whose keys are less than a specified key, tailMap whose keys are greater than a specified key, and a subMap which contains the keys falling between fromKey and toKey.

An example class that implements NavigableMap is TreeMap.

Methods of NavigableMap:

lowerKey(Object key) : Returns the greatest key strictly less than the given key, or null if there is no such key.
floorKey(Object key) : Returns the greatest key less than or equal to the given key, or null if there is no such key.
ceilingKey(Object key) : Returns the least key greater than or equal to the given key, or null if there is no such key.
higherKey(Object key) : Returns the least key strictly greater than the given key, or null if there is no such key.
descendingMap() : Returns a reverse order view of the mappings contained in this map.
headMap(Object toKey, boolean inclusive) : Returns a view of the portion of this map whose keys are less than (or equal to, if inclusive is true) toKey.
subMap(Object fromKey, boolean fromInclusive, Object toKey, boolean toInclusive) : Returns a view of the portion of this map whose keys range from fromKey to toKey.
tailMap(Object fromKey, boolean inclusive) : Returns a view of the portion of this map whose keys are greater than (or equal to, if inclusive is true) fromKey.
// Java program to demonstrate NavigableMap
import java.util.NavigableMap;
import java.util.TreeMap;

public class Example
{
    public static void main(String[] args)
    {
        NavigableMap<String, Integer> nm = new TreeMap<>();
        nm.put("C", 888);
        nm.put("Y", 999);
        nm.put("A", 444);
        nm.put("T", 555);
        nm.put("B", 666);
        nm.put("A", 555);   // replaces the earlier value for key "A"

        System.out.printf("Descending Set : %s%n", nm.descendingKeySet());
        System.out.printf("Floor Entry : %s%n", nm.floorEntry("L"));
        System.out.printf("First Entry : %s%n", nm.firstEntry());
        System.out.printf("Last Key : %s%n", nm.lastKey());
        System.out.printf("First Key : %s%n", nm.firstKey());
        System.out.printf("Original Map : %s%n", nm);
        System.out.printf("Reverse Map : %s%n", nm.descendingMap());
    }
}
Output:

Descending Set : [Y, T, C, B, A]
Floor Entry : C=888
First Entry : A=555
Last Key : Y
First Key : A
Original Map : {A=555, B=666, C=888, T=555, Y=999}
Reverse Map : {Y=999, T=555, C=888, B=666, A=555}

NavigableSet represents a navigable set in Java Collection Framework. The NavigableSet interface inherits from the SortedSet interface. It behaves like a SortedSet with the exception that we have navigation methods available in addition to the sorting mechanisms of the SortedSet. For example, NavigableSet interface can navigate the set in reverse order compared to the order defined in SortedSet.

The classes that implement this interface are TreeSet and ConcurrentSkipListSet.

Methods of NavigableSet (Not in SortedSet):

lower(E e) : Returns the greatest element in this set which is strictly less than the given element, or null if there is no such element.
floor(E e) : Returns the greatest element in this set which is less than or equal to the given element, or null if there is no such element.
ceiling(E e) : Returns the least element in this set which is greater than or equal to the given element, or null if there is no such element.
higher(E e) : Returns the least element in this set which is strictly greater than the given element, or null if there is no such element.
pollFirst() : Retrieves and removes the first (lowest) element, or returns null if there is no such element.
pollLast() : Retrieves and removes the last (highest) element, or returns null if there is no such element.
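
A quick TreeSet sketch exercising these NavigableSet methods (values are arbitrary):

import java.util.Arrays;
import java.util.NavigableSet;
import java.util.TreeSet;

public class NavigableSetDemo {
    public static void main(String[] args) {
        NavigableSet<Integer> set = new TreeSet<>(Arrays.asList(10, 20, 30, 40));

        System.out.println(set.lower(30));        // 20 - greatest element strictly less than 30
        System.out.println(set.floor(30));        // 30 - greatest element <= 30
        System.out.println(set.ceiling(25));      // 30 - least element >= 25
        System.out.println(set.higher(30));       // 40 - least element strictly greater than 30
        System.out.println(set.pollFirst());      // 10 - retrieves and removes the lowest element
        System.out.println(set.pollLast());       // 40 - retrieves and removes the highest element
        System.out.println(set.descendingSet());  // [30, 20] - reverse view of what is left
    }
}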

A Map cannot contain duplicate keys and each key can map to at most one value. Some implementations allow a null key and null values (HashMap and LinkedHashMap), while others do not allow null keys (TreeMap).

The order of a map depends on the specific implementation, e.g. TreeMap and LinkedHashMap have predictable order, while HashMap does not.
Example classes that implement this interface are HashMap, TreeMap and LinkedHashMap.
A map of error codes and their descriptions.
A map of zip codes and cities.
A map of managers and employees. Each manager (key) is associated with a list of employees (value) he manages.
A map of classes and students. Each class (key) is associated with a list of students (value).

Methods of Map:

public Object put(Object key, Object value) :- is used to insert an entry in this map.
public void putAll(Map map) :- is used to insert the specified map in this map.
public Object remove(Object key) :- is used to delete an entry for the specified key.
public Object get(Object key) :- is used to return the value for the specified key.
public boolean containsKey(Object key) :- is used to search the specified key from this map.
public Set keySet() :- returns the Set view containing all the keys.
public Set entrySet() :- returns the Set view containing all the key-value mappings (Map.Entry objects).
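
A small sketch of these Map methods, using the error-code example from above (codes and messages are made up):

import java.util.HashMap;
import java.util.Map;

public class MapMethodsDemo {
    public static void main(String[] args) {
        Map<String, String> errorCodes = new HashMap<>();
        errorCodes.put("404", "Not Found");
        errorCodes.put("500", "Internal Server Error");

        System.out.println(errorCodes.get("404"));          // Not Found
        System.out.println(errorCodes.containsKey("500"));  // true
        System.out.println(errorCodes.keySet());            // Set view of all keys

        // entrySet() gives a Set view of the key-value pairs (Map.Entry objects)
        for (Map.Entry<String, String> entry : errorCodes.entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }

        errorCodes.remove("404");
        System.out.println(errorCodes.size());              // 1
    }
}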

Iterator in Java is nothing but a traversing object, made specifically for Collection objects like List and Set. We are already aware of different kinds of traversal, like the for loop, while loop, do-while and for-each loop; these are all index-based traversals. Since Java is an object oriented language, there is also an object-based way of doing it: Iterator is a way to traverse as well as access the data from a collection.

Read more: http://javarevisited.blogspot.com/2011/10/java-iterator-tutorial-example-list.html#ixzz5AlOawF8L

Java Iterator is an interface belonging to the collection framework that allows us to traverse a collection and access its data elements
without bothering the user with the specific implementation of that collection.
Basically the List and Set collections provide an iterator. You can get an Iterator from ArrayList,
LinkedList, TreeSet etc. Map implementations such as HashMap don't provide an Iterator directly, but you can get their keySet or values collection
and iterate through that.

Read more: http://javarevisited.blogspot.com/2011/10/java-iterator-tutorial-example-list.html#ixzz5AlPwj9u7

Why use Iterator when we have Enumeration:
Both Iterator and Enumeration are used for traversing a collection, but Iterator is more enhanced in terms of the extra remove() method it provides for modifying the collection, which is not available in Enumeration. Along with that it is more secure: it doesn't allow another thread to modify the collection while some thread is iterating over it, and throws ConcurrentModificationException. The method names are also more convenient in Iterator, which is not a major difference but makes Iterator easier to use.

Read more: http://javarevisited.blogspot.com/2011/10/java-iterator-tutorial-example-list.html#ixzz5AlQOc6yW

What is List Iterator in Java?
ListIterator in Java is an Iterator which allows the user to traverse List implementations like ArrayList and LinkedList in both directions by using the methods previous() and next(). You can obtain a ListIterator from all List implementations, including ArrayList and LinkedList. ListIterator doesn't keep a current index; its current position is determined by calls to next() or previous() based on the direction of traversal.

Important points about Iterator in Java:
1) Iterator in Java supports generics, so always use the generic version of Iterator rather than Iterator with a raw type.

2) If you want to remove objects from a Collection then don't use a for-each loop; instead use Iterator's remove() method to avoid any ConcurrentModificationException.

3) Iterating over a collection using Iterator is subject to ConcurrentModificationException if the Collection is modified after iteration started, but this only happens in the case of fail-fast Iterators.

4) There are two types of Iterators in Java, fail-fast and fail-safe; check the difference between fail-safe and fail-fast Iterators for more details.

5) The List collection type also supports ListIterator, which has an add() method to add elements to the collection while iterating. There are some more differences between Iterator and ListIterator, like bidirectional traversal, which we discussed above; why ListIterator has an add() method is also a popular Java Collection interview question (see the sketch below).
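
A minimal ListIterator sketch showing add() during iteration and backward traversal (element values are arbitrary):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.ListIterator;

public class ListIteratorDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("a", "b", "c"));

        ListIterator<String> it = names.listIterator();
        while (it.hasNext()) {
            if (it.next().equals("b")) {
                it.add("b2");   // adding through the ListIterator is allowed while iterating
            }
        }
        System.out.println(names);   // [a, b, b2, c]

        // Traverse backwards, starting from the end of the list
        ListIterator<String> back = names.listIterator(names.size());
        while (back.hasPrevious()) {
            System.out.print(back.previous() + " ");   // c b2 b a
        }
    }
}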

Read more: http://javarevisited.blogspot.com/2011/10/java-iterator-tutorial-example-list.html#ixzz5AlQmGuN0

ConcurrentHashMap vs Hashtable vs Synchronized Map
Though all three collection classes are thread-safe and can be used in multi-threaded, concurrent Java applications, there is a significant difference between them, which arises from how they achieve their thread-safety. Hashtable is a legacy class from JDK 1.1 which uses synchronized methods to achieve thread-safety. All methods of Hashtable are synchronized, which makes them quite slow due to contention as the number of threads increases. A synchronized Map is also not very different from Hashtable and provides similar performance in concurrent Java programs. The only difference between Hashtable and a synchronized Map is that the latter is not legacy: you can wrap any Map to create its synchronized version by using the Collections.synchronizedMap() method.

On the other hand, ConcurrentHashMap is specially designed for concurrent use, i.e. more than one thread. By default it allows 16 writer threads to operate on the map simultaneously (a concurrency level of 16) without any external synchronization, and reads generally do not block. It is also very scalable because of the lock striping technique used in the internal implementation of the ConcurrentHashMap class. Unlike Hashtable and a synchronized Map, it never locks the whole map; instead it divides the map into segments and locking is done on those. It performs best when the number of reader threads is greater than the number of writer threads.
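
A minimal sketch contrasting the three (class and variable names are mine; the weakly consistent iteration of ConcurrentHashMap is the point):

import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadSafeMaps {
    public static void main(String[] args) {
        Map<String, Integer> hashtable = new Hashtable<>();                            // legacy, every method synchronized
        Map<String, Integer> syncMap = Collections.synchronizedMap(new HashMap<>());   // wrapper, one lock for the whole map
        Map<String, Integer> concurrent = new ConcurrentHashMap<>();                   // fine-grained locking

        concurrent.put("a", 1);
        concurrent.put("b", 2);
        // ConcurrentHashMap iterators are weakly consistent: modifying the map while
        // iterating over it never throws ConcurrentModificationException
        for (String key : concurrent.keySet()) {
            concurrent.remove("b");
        }
        System.out.println(concurrent);   // {a=1}

        // A synchronized Map (like a Hashtable) must be locked manually while iterating
        synchronized (syncMap) {
            for (String key : syncMap.keySet()) {
                System.out.println(key);
            }
        }
    }
}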

Read more: http://javarevisited.blogspot.com/2011/04/difference-between-concurrenthashmap.html#ixzz5AlROrC5y

Java 5 added several new concurrent collection classes, e.g. ConcurrentHashMap, CopyOnWriteArrayList, BlockingQueue etc., which have made interview questions on Java Collections even trickier. Java also provides a way to get a synchronized copy of a collection, e.g. an ArrayList or HashMap, by using the Collections.synchronizedList() and Collections.synchronizedMap() utility methods. One significant difference is that concurrent collections have better performance than synchronized collections, because they lock only a portion of the data structure to achieve concurrency and synchronization. See the difference between synchronized collections and concurrent collections in Java for more details.

Read more: http://javarevisited.blogspot.com/2011/11/collection-interview-questions-answers.html#ixzz5AlRYjAyR

What is the difference between Iterator and Enumeration? (answer)
This is a beginner level collection interview question, mostly asked during interviews of junior Java developers with up to 2 to 3 years of experience. Iterator duplicates the functionality of Enumeration with one addition, the remove() method, and both provide navigation over the objects of a Collection. Another difference is that Iterator is safer than Enumeration: it doesn't allow another thread to modify the collection object during iteration (other than through its own remove() method) and throws ConcurrentModificationException. See Iterator vs Enumeration in Java for more differences.

Read more: http://javarevisited.blogspot.com/2011/11/collection-interview-questions-answers.html#ixzz5AlRmUBGU

The problem is caused by not overriding the hashCode() method. The contract between equals() and hashCode() is:
1) If two objects are equal, then they must have the same hash code.
2) If two objects have the same hash code, they may or may not be equal.
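
A minimal sketch of what goes wrong when only equals() is overridden (BrokenKey is a hypothetical class; the lookup almost always returns null because the two keys get different identity hash codes):

import java.util.HashMap;
import java.util.Map;

class BrokenKey {
    final String id;
    BrokenKey(String id) { this.id = id; }

    @Override
    public boolean equals(Object o) {
        return o instanceof BrokenKey && ((BrokenKey) o).id.equals(id);
    }
    // hashCode() deliberately NOT overridden, so the equals/hashCode contract is violated
}

public class HashCodeContractDemo {
    public static void main(String[] args) {
        Map<BrokenKey, String> map = new HashMap<>();
        map.put(new BrokenKey("42"), "value");

        // The two keys are equal(), but their identity hash codes differ,
        // so the lookup goes to the wrong bucket and the value appears lost
        System.out.println(map.get(new BrokenKey("42")));   // null (almost always)
    }
}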

With Collections.sort(list) the list gets sorted based on the natural order specified in the compareTo() method, while Collections.sort(list, comparator) will sort objects based on the compare() method of the Comparator.
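
A small sketch of both forms (the strings and the reverse-order Comparator are just for illustration):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("banana", "Apple", "cherry"));

        // Natural order: uses String.compareTo() (case-sensitive, uppercase letters sort first)
        Collections.sort(names);
        System.out.println(names);   // [Apple, banana, cherry]

        // Custom order: uses the Comparator's compare() instead of compareTo()
        Collections.sort(names, Comparator.reverseOrder());
        System.out.println(names);   // [cherry, banana, Apple]
    }
}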

Read more: http://javarevisited.blogspot.com/2011/11/collection-interview-questions-answers.html#ixzz5AlSm1num

The first difference between a synchronized ArrayList and CopyOnWriteArrayList comes from how they achieve thread safety. A synchronized List locks the whole list to provide synchronization and thread-safety, while CopyOnWriteArrayList doesn't lock the list: when a thread writes to it, it simply replaces the underlying array with a modified copy. This way it provides concurrent access to the list to multiple threads without locking, since reads are thread-safe and writers never modify the array that readers are currently looking at.

2) The second difference between them comes from how their iterators behave. The iterator returned by a synchronized ArrayList is fail-fast, but the iterator returned by CopyOnWriteArrayList is a fail-safe iterator, i.e. it will not throw ConcurrentModificationException even when the list is modified while one thread is iterating over it. If you want to learn more about fail-safe and fail-fast iterators, I suggest you read my earlier post on the difference between fail-safe and fail-fast iterators in Java.

3) The third and one of the key differences between CopyOnWriteArrayList and ArrayList is performance, especially if the list is mostly used for read-only purposes. CopyOnWriteArrayList is likely to outperform a synchronized ArrayList in that case, but if there is a mix of reads and writes then stick with the synchronized ArrayList.
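
A minimal single-threaded sketch of the iterator difference (list contents are arbitrary; the snapshot behaviour of CopyOnWriteArrayList is the point):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowListDemo {
    public static void main(String[] args) {
        List<String> cow = new CopyOnWriteArrayList<>(Arrays.asList("a", "b"));
        for (String s : cow) {
            cow.add(s + "!");   // allowed: the iterator works on a snapshot of the list
        }
        System.out.println(cow);   // [a, b, a!, b!]

        List<String> sync = Collections.synchronizedList(new ArrayList<>(Arrays.asList("a", "b")));
        try {
            for (String s : sync) {
                sync.add(s + "!");   // the fail-fast iterator detects the modification
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("synchronized list iterator is fail-fast");
        }
    }
}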

Read more: http://www.java67.com/2015/06/difference-between-synchronized-arraylist-and-copyOnWriteArrayList-java.html#ixzz5AlTQ9JIa

Register a Shutdown hook with the JVM
The ConfigurableApplicationContext interface exposes the close() and registerShutdownHook() methods. You can explicitly call the close() method, or call the registerShutdownHook() method. Both are meant for a non-web application context. These methods will call the registered destruction methods of singleton beans before the JVM is shut down.

package com.memorynotfound.spring.core.lifecycle;

import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Run {

    public static void main(String... args) throws InterruptedException {
        ConfigurableApplicationContext context = new ClassPathXmlApplicationContext("app-config.xml");
        // implicit shutdown hook
        context.registerShutdownHook();

        // or explicit close method
        context.close();
    }
}

You can initialize or destroy your singleton bean using the xml configuration attributes init-method and destroy-method respectively. You must specify the name of the method that has a void no-argument signature. These methods will be called upon initialization and destruction of your singleton bean.
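
A minimal sketch of such a bean, assuming it is wired in XML with init-method="init" and destroy-method="cleanup" (the class and method names are hypothetical):

// Hypothetical singleton bean; the method names below would be referenced from XML, e.g.
// <bean id="connectionPool" class="com.example.ConnectionPool" init-method="init" destroy-method="cleanup"/>
public class ConnectionPool {

    // void, no-argument method called after the bean has been configured
    public void init() {
        System.out.println("opening connections...");
    }

    // void, no-argument method called when the context is closed (or the shutdown hook runs)
    public void cleanup() {
        System.out.println("closing connections...");
    }
}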

https://www.concretepage.com/spring/spring-bean-life-cycle-tutorial

Java ClassLoader loads a Java class file into the Java virtual machine. It is as simple as that. It is not a huge complicated concept to learn, and every Java developer should know about Java class loaders and how they work.

Like NullPointerException, one exception that is very popular is ClassNotFoundException. At least in your beginner stage you might have seen umpteen ClassNotFoundExceptions. The Java class loader is the culprit causing this exception.

Types (Hierarchy) of Java Class Loaders
Java class loaders can be broadly classified into below categories:

Bootstrap Class Loader
The bootstrap class loader loads Java's core classes like java.lang, java.util etc. These are classes that are part of the Java runtime environment. The bootstrap class loader is a native implementation and so may differ across JVMs.
Extensions Class Loader
JAVA_HOME/jre/lib/ext contains JAR packages that are extensions of the standard core Java classes. The extensions class loader loads classes from this ext folder. Using the system property java.ext.dirs you can add further 'ext' folders and JAR files to be loaded by the extensions class loader.
System Class Loader
Java classes that are available on the Java classpath are loaded using the system class loader.
There are more class loaders like java.net.URLClassLoader, java.security.SecureClassLoader etc. They all extend java.lang.ClassLoader.

These class loaders have a hierarchical relationship among them. A class loader can see the classes loaded by the loaders above it in the hierarchy. The first level is the bootstrap class loader, the second level is the extensions class loader and the third level is the system class loader.

Class Self Reference
When a Java source file is compiled to a binary class, the compiler inserts a field into the Java class file: a public static final field named 'class' of type java.lang.Class.

So for any Java class you can access it as java.lang.Class classObj = ClassName.class;

The significance of this Class object is that it contains a method getClassLoader() which returns the class loader of the class. It returns null if the class was loaded by the bootstrap class loader.
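
A tiny sketch of getClassLoader(), including walking the parent chain (the exact loader names printed depend on the JVM version):

public class ClassLoaderDemo {
    public static void main(String[] args) {
        // Core classes come from the bootstrap class loader, so getClassLoader() returns null
        System.out.println(String.class.getClassLoader());            // null

        // Application classes are loaded by the system (application) class loader
        System.out.println(ClassLoaderDemo.class.getClassLoader());   // e.g. ...AppClassLoader@...

        // Walk up the parent chain; the bootstrap loader at the top is represented as null
        ClassLoader loader = ClassLoaderDemo.class.getClassLoader();
        while (loader != null) {
            System.out.println(loader);
            loader = loader.getParent();
        }
    }
}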

How a Java Class Loader Works?
When a class name is given, the class loader first locates the class and then reads the class file of that name from the native file system. Therefore this loading process is platform dependent.

By default java.lang.ClassLoader is registered as a class loader that is capable of loading classes in parallel, but subclasses need to register themselves as parallel capable at the time of instantiation.

Classes can also be loaded from the network, or constructed at runtime and then loaded. The ClassLoader class has a method named defineClass which takes a byte array as input and loads a class.

Class Loader Parent
All class loaders except the bootstrap class loader have a parent class loader. This parent is not the parent-child relationship of inheritance; rather, every class loader instance is associated with a parent class loader instance.

When a class loader is entrusted with the responsibility of loading a class, as a first step it delegates this work to its associated parent class loader, which in turn delegates the call to its own parent. In this chain of hierarchy the bootstrap class loader is at the top.

When a class loader instance is created, the parent class loader can be associated with it through its constructor.

Class Loader Rule 1
A class is loaded only once into the JVM.

In this rule, what is "a class"? The uniqueness of a class is determined together with the ClassLoader instance that loaded it into the JVM. A class is always identified by its fully qualified name (package.classname), so when a class is loaded into the JVM there is effectively an entry (package, classname, classloader). Therefore the same class can be loaded twice by two different ClassLoader instances.

I will be writing some more articles on custom class loaders, jar hell and internals of class loading like loading-linking.


ClassNotFoundException vs. NoClassDefFoundError
This quick tutorial will help you learn to distinguish between two similar, but different problems that can crop up in your code.
by Suresh Rajagopal · Nov. 07, 16 · Java Zone · Tutorial
ClassNotFoundException and NoClassDefFoundError occur when a particular class is not found at runtime. However, they occur at different scenarios.

ClassNotFoundException is an exception that occurs when you try to load a class at run time using the Class.forName() or loadClass() methods and the mentioned class is not found on the classpath.

NoClassDefFoundError is an error that occurs when a particular class is present at compile time, but was missing at run time.

ClassNotFoundException
ClassNotFoundException is a runtime exception that is thrown when an application tries to load a class at runtime using the Class.forName(), loadClass() or findSystemClass() methods and the class with the specified name is not found on the classpath. For example, you may have come across this exception when you try to connect to a MySQL or Oracle database and you have not updated the classpath with the required JAR files. Most of the time, this exception occurs when you try to run an application without updating the classpath with the required JAR files.

For example, the below program will throw ClassNotFoundException if the mentioned class “oracle.jdbc.driver.OracleDriver” is not found in the classpath.

public class MainClass
{
    public static void main(String[] args)
    {
        try
        {
            Class.forName("oracle.jdbc.driver.OracleDriver");
        } catch (ClassNotFoundException e)
        {
            e.printStackTrace();
        }
    }
}
If you run the above program without updating the classpath with required JAR files, you will get an exception akin to:

java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriver
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Unknown Source)
at pack1.MainClass.main(MainClass.java:17)

NoClassDefFoundError
NoClassDefFoundError is an error that is thrown when the Java Runtime System tries to load the definition of a class, and that class definition is no longer available. The required class definition was present at compile time, but it was missing at runtime. For example, compile the program below.

class A
{
    // some code
}

public class B
{
    public static void main(String[] args)
    {
        A a = new A();
    }
}
When you compile the above program, two .class files will be generated: A.class and B.class. If you remove the A.class file and run B, the Java Runtime System will throw NoClassDefFoundError like below:

Exception in thread "main" java.lang.NoClassDefFoundError: A
at MainClass.main(MainClass.java:10)
Caused by: java.lang.ClassNotFoundException: A
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

Recap

ClassNotFoundException vs NoClassDefFoundError:

1) ClassNotFoundException is an exception of type java.lang.Exception; NoClassDefFoundError is an error of type java.lang.Error.

2) ClassNotFoundException occurs when an application tries to load a class at run time that is not on the classpath; NoClassDefFoundError occurs when the Java runtime system doesn't find a class definition that was present at compile time but is missing at run time.

3) ClassNotFoundException is thrown by the application itself, by methods like Class.forName(), loadClass() and findSystemClass(); NoClassDefFoundError is thrown by the Java Runtime System.

4) ClassNotFoundException typically means the classpath is not updated with the required JAR files; NoClassDefFoundError means the required class definition is missing at runtime.

Spring Autowire by constructor – multiple candidates
With Spring autowiring by constructor enabled and multiple candidate beans present, the container throws a NoUniqueBeanDefinitionException and fails to start.

Exclude bean from Autowiring
You can exclude a bean from autowiring using the autowire-candidate attribute of the <bean> element. This makes the bean unavailable to the Spring container for autowiring.

@Configuration
@Import({ DbConfig.class })
@ImportResource("classpath:app-config.xml")
public class AppConfig {

}

We can simply embed an embeddable class using the @Embedded annotation. This class will be embedded in the same database table as the owning entity. We can optionally override the attributes of the embedded class using the @AttributeOverrides and @AttributeOverride annotations. We first specify, by name, which attribute we wish to override; afterwards we override it using the @Column annotation.

package com.memorynotfound.hibernate;

import javax.persistence.*;
import java.util.Date;

@Entity
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;

    private String name;

    @Embedded
    @AttributeOverrides(value = {
            @AttributeOverride(name = "zip", column = @Column(length = 10)),
            @AttributeOverride(name = "city", column = @Column(nullable = false))
    })
    private Address address;

    public Person() {
    }

    // getters and setters omitted
}
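
The Address class referenced above isn't shown in the snippet; a plausible @Embeddable sketch whose zip and city fields match the @AttributeOverride names (the street field and everything else is assumed):

package com.memorynotfound.hibernate;

import javax.persistence.Embeddable;

// Hypothetical embeddable: its columns end up in the Person table
@Embeddable
public class Address {

    private String street;   // assumed extra field
    private String zip;      // overridden in Person with @Column(length = 10)
    private String city;     // overridden in Person with @Column(nullable = false)

    public Address() {
    }

    // getters and setters omitted
}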

In this tutorial we demonstrate how to read and write Spring JMS message header properties.
We show various ways in which you can access header information. We can use the @Header annotation to obtain a single header attribute.
The @Headers annotation can inject all headers into a Map.
We can also access header information using the MessageHeaders and JmsMessageHeaderAccessor classes.

@JmsListener(destination = ORDER_QUEUE)
public void receiveMessage(@Payload Order order,
                           @Header(JmsHeaders.CORRELATION_ID) String correlationId,
                           @Header(name = "jms-header-not-exists", defaultValue = "default") String nonExistingHeader,
                           @Headers Map<String, Object> headers,
                           MessageHeaders messageHeaders,
                           JmsMessageHeaderAccessor jmsMessageHeaderAccessor) {

Asynchronous service with spring @Async and Java Future
BY MEMORYNOTFOUND · PUBLISHED JULY 10, 2015 · UPDATED MARCH 31, 2016

In this tutorial we use Java's Future together with Spring's @Async thread execution. Whenever you have a time-consuming task, the best practice is to start it in a new thread and handle the service asynchronously. In this example we use Spring @Async to fire up a new thread and get the result at a later time.

Dependencies
We use spring @Async for the asynchronous thread callbacks, so we need the following dependencies.

<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.memorynotfound.spring.core</groupId>
    <artifactId>async-service</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <url>https://memorynotfound.com</url>
    <name>SPRING - ${project.artifactId}</name>

    <properties>
        <spring.version>4.1.7.RELEASE</spring.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>
</project>

Bootstrapping Spring Application Java Configuration
Let's start by bootstrapping our Spring application.

Info: Don't forget to add @EnableAsync to the configuration class. Like the name indicates, this enables asynchronous processing. Without this annotation the methods annotated with @Async will not execute asynchronously.
package com.memorynotfound;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;

@EnableAsync
@Configuration
@ComponentScan("com.memorynotfound")
public class AppConfig {

}
Spring @Async Method invocation using Java's Future
Here we create a simple long-running method annotated with Spring's @Async annotation. Together with the @EnableAsync annotation, this is all it takes to enable asynchronous processing with Spring. We return a Future (from java.util.concurrent, available since Java 5); a Future represents the result of an asynchronous computation.

package com.memorynotfound;

import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;
import org.springframework.stereotype.Component;
import java.util.concurrent.Future;

@Component
public class MailSender {

    @Async
    public Future<Boolean> sendMail() throws InterruptedException {
        System.out.println("sending mail..");
        Thread.sleep(1000 * 10);
        System.out.println("sending mail completed");
        return new AsyncResult<Boolean>(true);
    }
}
Execute Spring @Async Methods
Finally let's put it all together and execute an example use case.

package com.memorynotfound;

import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class Main {

    public static void main(String... args) throws InterruptedException, ExecutionException {

        ApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class);
        MailSender mailSender = context.getBean(MailSender.class);

        System.out.println("about to run");
        Future<Boolean> future = mailSender.sendMail();
        System.out.println("this will run immediately.");

        Boolean result = future.get();

        System.out.println("mail send result: " + result);
    }
}

package com.example.demo;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;

@Configuration
@EnableAsync
@ComponentScan("com.example.demo")
public class AppConfig {

}
package com.example.demo;

import java.util.concurrent.Future;

import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;
import org.springframework.stereotype.Component;

@Component
public class GtedataANdSendEmail {

    @Async // without @Async this method would execute synchronously
    public Future<Boolean> sendEmail() throws InterruptedException {
        System.out.println("sending mail..");
        Thread.sleep(1000 * 10);
        System.out.println("sending mail completed");
        return new AsyncResult<Boolean>(true);
    }
}

Collection vs Arrays:
=====================
Collections are re-sizable, dynamically growable memory.
They provide useful data structures in the form of predefined classes, which reduces programming effort.
They support storing heterogeneous elements or objects.
They provide good performance.
They provide extendability (depending on the incoming flow of data, the size of a collection variable can grow as needed).
They provide adaptability (the process of adding the contents of one collection to another, at the beginning, end or middle, is known as adaptability).
They are algorithm oriented.
They provide built-in sorting techniques.
They provide built-in searching techniques.
They provide higher-level Data Structure concepts such as Stack, Queue, LinkedList, Trees etc.
Arrays vs Collections:
-------------------
1 Arrays are fixed in size: once we create an array we cannot increase or decrease its size based on our requirement. Collections are growable in nature, so based on our requirement we can increase or decrease the size.
2 Arrays can hold both primitives and objects. Collections can hold only objects, not primitives.
3 From a performance point of view arrays are faster than collections; collections are slower than arrays.
4 Arrays can hold only homogeneous elements. Collections can hold both homogeneous and heterogeneous elements.
5 From a memory point of view arrays are not recommended; collections are recommended.
6 With arrays there is no ready-made method for a given requirement; with collections ready-made method support is available for almost every requirement.

It’s easy if you think of it like this: Collections are better than object arrays in basically every way imaginable.

You should prefer List<Foo> over Foo[] whenever possible. Consider:

A collection can be mutable or immutable. A nonempty array must always be mutable.
A collection can be thread-safe; even concurrent. An array is never safe to publish to multiple threads.
A collection can allow or disallow null elements. An array must always permit null elements.
A collection is type-safe; an array is not. Because arrays “fake” covariance, ArrayStoreException can result at runtime.
A collection can hold a non-reifiable type (e.g. List<Class> or List<Optional>). With an array you get compilation warnings and confusing runtime exceptions.
A collection has a fully fleshed-out API; an array has only set-at-index, get-at-index and length.
A collection can have views (unmodifiable, subList, filter…). No such luck for an array.
A list or set’s equals, hashCode and toString methods do what users expect; those methods on an array do anything but what you expect — a common source of bugs.
Because of all the reasons above, third-party libraries like Guava won’t bother adding much additional support for arrays, focusing only on collections, so there is a network effect.
Object arrays will never be first-class citizens in Java.

A few of the reasons above are covered in much greater detail in Effective Java, Second Edition, starting at page 119.

So, why would you ever use object arrays?

You have to interact with an API that uses them, and you can’t fix that API
so convert to/from a List as close to that API as you can
You have a reliable benchmark that shows you’re actually getting better performance with them
but benchmarks can lie, and often do
I can’t think of any other reasons

Collection(I) :
===============
Top-most interface of all collections.
There is no concrete implementation of this interface.
All collection classes should directly or indirectly implement it.
Provides all common methods like add, addAll, contains, remove, toArray, equals etc.
Implementation classes should provide at least two constructors: a default one and one that takes a Collection as an argument.
The contains method internally uses equals logic to find existence: (o==null ? e==null : o.equals(e)).
You can create a collection from another collection via this constructor, but if the target collection does not support some of the elements (e.g. it does not allow null), it will throw NullPointerException or ClassCastException.

Map map = new HashMap();
map.put("one", "hdjhfbhd");
map.put("two", null);

// trying to insert null into a Hashtable
Map hashTable = new Hashtable(map);
///Map hashTable2 = new Hashtable(integers);
System.out.println(map);

Exception in thread "main" java.lang.NullPointerException
at java.util.Hashtable.put(Unknown Source)
at java.util.Hashtable.putAll(Unknown Source)
at java.util.Hashtable.<init>(Unknown Source)
at com.me.pack.PrimitiveDraw.main(PrimitiveDraw.java:47)

You can't pass Map subclasses to Collection subclass constructors, because Map and Collection are different interfaces.

public interface Collection<E>
extends Iterable<E>
The root interface in the collection hierarchy. A collection represents a group of objects, known as its elements. Some collections allow duplicate elements and others do not. Some are ordered and others unordered. The JDK does not provide any direct implementations of this interface: it provides implementations of more specific subinterfaces like Set and List. This interface is typically used to pass collections around and manipulate them where maximum generality is desired.
Bags or multisets (unordered collections that may contain duplicate elements) should implement this interface directly.

All general-purpose Collection implementation classes (which typically implement Collection indirectly through one of its subinterfaces) should provide two “standard” constructors: a void (no arguments) constructor, which creates an empty collection, and a constructor with a single argument of type Collection, which creates a new collection with the same elements as its argument. In effect, the latter constructor allows the user to copy any collection, producing an equivalent collection of the desired implementation type. There is no way to enforce this convention (as interfaces cannot contain constructors) but all of the general-purpose Collection implementations in the Java platform libraries comply.

The “destructive” methods contained in this interface, that is, the methods that modify the collection on which they operate, are specified to throw UnsupportedOperationException if this collection does not support the operation. If this is the case, these methods may, but are not required to, throw an UnsupportedOperationException if the invocation would have no effect on the collection. For example, invoking the addAll(Collection) method on an unmodifiable collection may, but is not required to, throw the exception if the collection to be added is empty.

Some collection implementations have restrictions on the elements that they may contain. For example, some implementations prohibit null elements, and some have restrictions on the types of their elements. Attempting to add an ineligible element throws an unchecked exception, typically NullPointerException or ClassCastException. Attempting to query the presence of an ineligible element may throw an exception, or it may simply return false; some implementations will exhibit the former behavior and some will exhibit the latter. More generally, attempting an operation on an ineligible element whose completion would not result in the insertion of an ineligible element into the collection may throw an exception or it may succeed, at the option of the implementation. Such exceptions are marked as “optional” in the specification for this interface.

It is up to each collection to determine its own synchronization policy. In the absence of a stronger guarantee by the implementation, undefined behavior may result from the invocation of any method on a collection that is being mutated by another thread; this includes direct invocations, passing the collection to a method that might perform invocations, and using an existing iterator to examine the collection.

Many methods in Collections Framework interfaces are defined in terms of the equals method. For example, the specification for the contains(Object o) method says: “returns true if and only if this collection contains at least one element e such that (o==null ? e==null : o.equals(e)).” This specification should not be construed to imply that invoking Collection.contains with a non-null argument o will cause o.equals(e) to be invoked for any element e. Implementations are free to implement optimizations whereby the equals invocation is avoided, for example, by first comparing the hash codes of the two elements. (The Object.hashCode() specification guarantees that two objects with unequal hash codes cannot be equal.) More generally, implementations of the various Collections Framework interfaces are free to take advantage of the specified behavior of underlying Object methods wherever the implementor deems it appropriate.

Some collection operations which perform recursive traversal of the collection may fail with an exception for self-referential instances where the collection directly or indirectly contains itself. This includes the clone(), equals(), hashCode() and toString() methods. Implementations may optionally handle the self-referential scenario, however most current implementations do not do so.

This interface is a member of the Java Collections Framework.

Implementation Requirements:
The default method implementations (inherited or otherwise) do not apply any synchronization protocol. If a Collection implementation
has a specific synchronization protocol, then it must override default implementations to apply that protocol.

Collection(I) methods:
======================
retainAll():
———–
List<Integer> firstlist = new ArrayList<>(Arrays.asList(52, 25, 36, 25, 3696, 155, 65));
List<Integer> seclist = new ArrayList<>(Arrays.asList(52, 25, 65));

System.out.println(firstlist.retainAll(seclist));

System.out.println(firstlist);
System.out.println(seclist);

true
[52, 25, 25, 65]
[52, 25, 65]

List<Integer> firstlist = new ArrayList<>(Arrays.asList(52, 25, 36, 25, 3696, 155, 65));
List<Integer> seclist = new ArrayList<>(Arrays.asList(522565));

System.out.println(firstlist.retainAll(seclist));

System.out.println(firstlist);
System.out.println(seclist);
true
[]
[522565]

retainAll() keeps only the elements that also appear in the second collection; the remaining elements of the first collection are removed, and the second collection is unaffected.
If there is no matching element, the first collection becomes empty and the second stays as it is. The return value is true if the first collection was modified.

removeIf(Predicate..):
———–
List<Integer> firstlist = new ArrayList<>(Arrays.asList(52, 25, 36, 25, 3696, 155, 65));
List<Integer> seclist = new ArrayList<>(Arrays.asList(522565));

//System.out.println(firstlist.retainAll(seclist));

System.out.println(firstlist);
System.out.println(seclist);

firstlist.removeIf(i -> i > 3000);
System.out.println(firstlist);

[52, 25, 36, 25, 3696, 155, 65]
[522565]
[52, 25, 36, 25, 155, 65]

toArray() methods:
——————
List<Integer> firstlist = new ArrayList<>(Arrays.asList(52, 25, 36, 25, 3696, 155, 65));
toArray() returns an array of type Object[] (printing it shows only the array reference, not the contents).
Object[] objects = firstlist.toArray();
System.out.println(objects);

To convert to a specific object type:
System.out.println(firstlist.size());
Integer[] integerArray = new Integer[firstlist.size()];
firstlist.toArray(integerArray);

for (Integer y : integerArray) {
    System.out.println(y);
}
System.out.println(integerArray.length);
6
52
25
36
25
155
65
6

If the passed-in array is smaller than the collection, toArray(T[]) allocates and returns a new array of the right size; the array you passed in is left untouched, which is why printing it shows only nulls:
null
null
null
If the passed-in array is longer than the collection, the elements are copied in and the remaining slots are set to null:
155
65
null
null
null
null
[Ljava.lang.Integer;@e9e54c2
10

size() => the number of elements in this collection. If this collection contains more than Integer.MAX_VALUE elements, returns Integer.MAX_VALUE.
isEmpty() => true if there are no elements.
contains() => uses equals internally.
Returns true if this collection contains the specified element.
More formally, returns true if and only if this collection contains at least one element e such that (o==null ? e==null : o.equals(e)).
iterator() => used for traversing.
System.out.println("==============");
Iterator<Integer> iterator = firstlist.iterator();
while (iterator.hasNext()) {
    System.out.println(iterator.next());
}

ListIterator<Integer> listIterator = firstlist.listIterator(firstlist.size());
while (listIterator.hasPrevious()) {
    System.out.println(listIterator.previous());
}

Enumeration<Integer> enumeration = Collections.enumeration(firstlist);
while (enumeration.hasMoreElements()) {
    System.out.println(enumeration.nextElement());
}

add() =>
UnsupportedOperationException – if the add operation is not supported by this collection
ClassCastException – if the class of the specified element prevents it from being added to this collection
NullPointerException – if the specified element is null and this collection does not permit null elements
IllegalArgumentException – if some property of the element prevents it from being added to this collection
IllegalStateException – if the element cannot be added at this time due to insertion restrictions

remove() => (o==null ? e==null : o.equals(e)) check, then delete.
addAll(Collection c)
removeAll(Collection c)
containsAll(Collection c)

clear() => removes all elements.
Removes all of the elements from this collection (optional operation). The collection will be empty after this method returns.

stream() =>
parallelStream() =>
firstlist.stream().collect(Collectors.toList());

spliterator() =>

The Collection interface contains methods that perform basic operations, such as int size(), boolean isEmpty(), boolean contains(Object element), boolean add(E element), boolean remove(Object element), and Iterator iterator().

It also contains methods that operate on entire collections, such as boolean containsAll(Collection c), boolean addAll(Collection c), boolean removeAll(Collection c), boolean retainAll(Collection c), and void clear().

Additional methods for array operations (such as Object[] toArray() and T[] toArray(T[] a)) exist as well.
In JDK 8 and later, the Collection interface also exposes the methods Stream<E> stream() and Stream<E> parallelStream(), for obtaining sequential or parallel streams from the underlying collection. (See the lesson entitled Aggregate Operations for more information about using streams.)

The Collection interface does about what you’d expect given that a Collection represents a group of objects. It has methods that tell you how many elements are in the collection (size, isEmpty), methods that check whether a given object is in the collection (contains), methods that add and remove an element from the collection (add, remove), and methods that provide an iterator over the collection (iterator).

The add method is defined generally enough so that it makes sense for collections that allow duplicates as well as those that don’t. It guarantees that the Collection will contain the specified element after the call completes, and returns true if the Collection changes as a result of the call. Similarly, the remove method is designed to remove a single instance of the specified element from the Collection,
assuming that it contains the element to start with, and to return true if the Collection was modified as a result.

These are but a few examples of what you can do with streams and aggregate operations. For more information and examples, see the lesson entitled Aggregate Operations.

The Collections framework has always provided a number of so-called "bulk operations" as part of its API.
These include methods that operate on entire collections, such as containsAll, addAll, removeAll, etc.
Do not confuse those methods with the aggregate operations that were introduced in JDK 8. The key difference
between the new aggregate operations and the existing bulk operations is that bulk operations such as addAll, removeAll and retainAll are
mutative, meaning that they modify the underlying collection. In contrast, the new aggregate operations do not modify the underlying collection.
When using the new aggregate operations and lambda expressions, you must take care to avoid mutation so as not to introduce problems in the future,
should your code be run later from a parallel stream.

List(I):
========
A List is an ordered Collection (sometimes called a sequence). Lists may contain duplicate elements. In addition to the operations inherited from Collection, the List interface includes operations for the following:

Positional access — manipulates elements based on their numerical position in the list. This includes methods such as get, set, add, addAll, and remove.
Search — searches for a specified object in the list and returns its numerical position. Search methods include indexOf and lastIndexOf.
Iteration — extends Iterator semantics to take advantage of the list’s sequential nature. The listIterator methods provide this behavior.
Range-view — The subList method performs arbitrary range operations on the list.

remove(0) and removeAll() => remove elements from the start of the list onwards.
add, addAll => add the element(s) at the end of the collection.

list1.addAll(list2);
Here’s a nondestructive form of this idiom, which produces a third List consisting of the second list appended to the first.

List list3 = new ArrayList(list1);
list3.addAll(list2);
list3 now contains the elements of list1 followed by those of list2.

a. remove(int index) : accepts the index of the object to be removed.
b. remove(Object obj) : accepts the object to be removed.
remove(1)
remove(new Integer(123))
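
A tiny sketch of the remove(int) vs remove(Object) overloads on a List<Integer> (values are arbitrary):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RemoveOverloadDemo {
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>(Arrays.asList(10, 123, 30));

        numbers.remove(1);                    // remove(int index): removes the element at index 1, i.e. 123
        System.out.println(numbers);          // [10, 30]

        numbers.remove(Integer.valueOf(30));  // remove(Object): removes the value 30
        System.out.println(numbers);          // [10]
    }
}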

compare the List(I) elements:
—————————
Like the Set interface, List strengthens the requirements on the equals and hashCode methods so that
two List objects can be compared for logical equality without regard to their implementation classes.
Two List objects are equal if they contain the same elements in the same order.

ArrayList integers=new ArrayList(Arrays.asList(12,56,225));
ArrayList integers2=new ArrayList(Arrays.asList(12,56,225));
boolean b = integers2.equals(integers);
System.out.println(b);
—>true

ArrayList integers=new ArrayList(Arrays.asList(12,56,225));
ArrayList integers2=new ArrayList(Arrays.asList(12,225,56));
boolean b = integers2.equals(integers);
System.out.println(b);
–>false

Random class in java:
=====================
int a=(int)new Random().nextInt(2);
System.out.println(a);

if nextInt(0) then error =>
Exception in thread “main” java.lang.IllegalArgumentException: bound must be positive
at java.util.Random.nextInt(Unknown Source)
at com.me.pack.DrawShapes.main(DrawShapes.java:73)

Java 7 diamond operator (<>):
====================

// only Integers
List<Integer> integers3 = new ArrayList<>();
// only Integers, as we are limiting ourselves to Integer
List<Integer> integers4 = new ArrayList<Integer>();
// any values, including Object (raw type)
List integers5 = new ArrayList();
// any values, including Object
List<Object> integers6 = new ArrayList<Object>();
/*
 * CE (compile error): incompatible type arguments on the two sides
 *
 * List<Integer> integers7 = new ArrayList<Object>();
 * List<Object> integers8 = new ArrayList<Integer>();
 */

integers6.add(new Object());
integers6.add(new Integer(123));

integers5.add(new Object());
integers5.add(new Integer(123));

//integers4.add(new Object());
integers4.add(new Integer(123));

//integers3.add(new Object());
integers3.add(new Integer(123));
System.out.println(integers6);
System.out.println(integers5);

java8 - Streams, Pipelines, Iterators vs forEach - a good tutorial:
=============================================================
Lesson: Aggregate Operations
Note: To better understand the concepts in this section, review the sections Lambda Expressions and Method References.

For what do you use collections? You don’t simply store objects in a collection and leave them there. In most cases, you use collections to retrieve items stored in them.

Consider again the scenario described in the section Lambda Expressions. Suppose that you are creating a social networking application. You want to create a feature that enables an administrator to perform any kind of action, such as sending a message, on members of the social networking application that satisfy certain criteria.

As before, suppose that members of this social networking application are represented by the following Person class:

public class Person {

public enum Sex {
MALE, FEMALE
}

String name;
LocalDate birthday;
Sex gender;
String emailAddress;

// …

public int getAge() {
// …
}

public String getName() {
// …
}
}
The following example prints the name of all members contained in the collection roster with a for-each loop:

for (Person p : roster) {
System.out.println(p.getName());
}
The following example prints all members contained in the collection roster but with the aggregate operation forEach:

roster
.stream()
.forEach(e -> System.out.println(e.getName()));
Although, in this example, the version that uses aggregate operations is longer than the one that uses a for-each loop, you will see that versions that use bulk-data operations will be more concise for more complex tasks.

The following topics are covered:

Pipelines and Streams
Differences Between Aggregate Operations and Iterators
Find the code excerpts described in this section in the example BulkDataOperationsExamples.

Pipelines and Streams
A pipeline is a sequence of aggregate operations. The following example prints the male members contained in the collection roster with a pipeline that consists of the aggregate operations filter and forEach:

roster
.stream()
.filter(e -> e.getGender() == Person.Sex.MALE)
.forEach(e -> System.out.println(e.getName()));
Compare this example to the following that prints the male members contained in the collection roster with a for-each loop:

for (Person p : roster) {
if (p.getGender() == Person.Sex.MALE) {
System.out.println(p.getName());
}
}
A pipeline contains the following components:

A source: This could be a collection, an array, a generator function, or an I/O channel. In this example, the source is the collection roster.

Zero or more intermediate operations. An intermediate operation, such as filter, produces a new stream.

A stream is a sequence of elements. Unlike a collection, it is not a data structure that stores elements. Instead, a stream carries values from a source through a pipeline. This example creates a stream from the collection roster by invoking the method stream.

The filter operation returns a new stream that contains elements that match its predicate (this operation’s parameter). In this example, the predicate is the lambda expression e -> e.getGender() == Person.Sex.MALE. It returns the boolean value true if the gender field of object e has the value Person.Sex.MALE. Consequently, the filter operation in this example returns a stream that contains all male members in the collection roster.

A terminal operation. A terminal operation, such as forEach, produces a non-stream result, such as a primitive value (like a double value), a collection, or in the case of forEach, no value at all. In this example, the parameter of the forEach operation is the lambda expression e -> System.out.println(e.getName()), which invokes the method getName on the object e. (The Java runtime and compiler infer that the type of the object e is Person.)

The following example calculates the average age of all male members contained in the collection roster with a pipeline that consists of the aggregate operations filter, mapToInt, and average:

double average = roster
.stream()
.filter(p -> p.getGender() == Person.Sex.MALE)
.mapToInt(Person::getAge)
.average()
.getAsDouble();
The mapToInt operation returns a new stream of type IntStream (which is a stream that contains only integer values). The operation applies the function specified in its parameter to each element in a particular stream. In this example, the function is Person::getAge, which is a method reference that returns the age of the member. (Alternatively, you could use the lambda expression e -> e.getAge().) Consequently, the mapToInt operation in this example returns a stream that contains the ages of all male members in the collection roster.

The average operation calculates the average value of the elements contained in a stream of type IntStream. It returns an object of type OptionalDouble. If the stream contains no elements, then the average operation returns an empty instance of OptionalDouble, and invoking the method getAsDouble throws a NoSuchElementException. The JDK contains many terminal operations such as average that return one value by combining the contents of a stream. These operations are called reduction operations; see the section Reduction for more information.
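
Since average() returns an OptionalDouble, a small sketch of guarding against an empty stream without risking the NoSuchElementException mentioned above (the fallback value is an arbitrary choice):

import java.util.Collections;
import java.util.List;
import java.util.OptionalDouble;

public class AverageExample {
    public static void main(String[] args) {
        List<Integer> ages = Collections.emptyList();

        OptionalDouble average = ages.stream()
                .mapToInt(Integer::intValue)
                .average();

        // getAsDouble() would throw NoSuchElementException here, so supply a fallback
        System.out.println(average.orElse(Double.NaN)); // NaN
    }
}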

Differences Between Aggregate Operations and Iterators
Aggregate operations, like forEach, appear to be like iterators. However, they have several fundamental differences:

They use internal iteration: Aggregate operations do not contain a method like next to instruct them to process the next element of the collection. With internal delegation, your application determines what collection it iterates, but the JDK determines how to iterate the collection. With external iteration, your application determines both what collection it iterates and how it iterates it. However, external iteration can only iterate over the elements of a collection sequentially. Internal iteration does not have this limitation. It can more easily take advantage of parallel computing, which involves dividing a problem into subproblems, solving those problems simultaneously, and then combining the results of the solutions to the subproblems. See the section Parallelism for more information.

They process elements from a stream: Aggregate operations process elements from a stream, not directly from a collection. Consequently, they are also called stream operations.

They support behavior as parameters: You can specify lambda expressions as parameters for most aggregate operations. This enables you to customize the behavior of a particular aggregate operation.

IllegalArgumentException cases:
===============================
Any API should check the validity of every parameter of a public method before executing it:

void setPercentage(int pct, AnObject object) {
    if (pct < 0 || pct > 100) {
        throw new IllegalArgumentException("pct has an invalid value");
    }
    if (object == null) {
        throw new IllegalArgumentException("object is null");
    }
}

The api doc for IllegalArgumentException is:

Thrown to indicate that a method has been passed an illegal or inappropriate argument.

From looking at how it is used in the jdk libraries, I would say:

It seems like a defensive measure to complain about obviously bad input before the input can get into the works and cause something to fail halfway through with a nonsensical error message.

It’s used for cases where it would be too annoying to throw a checked exception (although it makes an appearance in the java.lang.reflect code, where concern about ridiculous levels of checked-exception-throwing is not otherwise apparent).

I would use IllegalArgumentException to do last-ditch defensive argument-checking for common utilities (trying to stay consistent with the jdk usage), where the expectation is that a bad argument is a programmer error, similar to an NPE. I wouldn’t use it to implement validation in business code.
I certainly wouldn’t use it for the email example.

Modifiable vs unmodifiable collections:
=====================================================
Collections.unmodifiableCollection(), unmodifiableList(), unmodifiableSet(), unmodifiableMap(), unmodifiableNavigableMap(), etc.
The Java Collections Framework provides an easy and simple way to create unmodifiable lists, sets and maps from existing ones, using the Collections‘ unmodifiableXXX() methods (Java Doc). In this article we will discuss how this works and will demonstrate common pitfalls when using such methods.

All code listed below is available at: https://github.com/javacreed/modifying-an-unmodifiable-list, under the collections project. Most of the examples will not contain the whole code and may omit fragments which are not relevant to the example being discussed. The readers can download or view all code from the above link.

Unmodifiable List
Consider the following simple example.

final List<String> modifiable = new ArrayList<String>();
modifiable.add("Java");
modifiable.add("is");

final List<String> unmodifiable = Collections.unmodifiableList(modifiable);
System.out.println("Before modification: " + unmodifiable);

modifiable.add("the");
modifiable.add("best");

System.out.println("After modification: " + unmodifiable);
Here we have a list named: modifiable, from which we create an unmodifiable version using the Collections‘ method unmodifiableList(), named: unmodifiable. As their names imply, one list is modifiable (we can add and remove elements) while the other is read-only.

When executed, the above code will print the following to the command prompt.

Before modification: [Java, is]
After modification: [Java, is, the, best]
If this was a surprise or an unexpected answer, then you should continue reading this article to find out why this was produced.

Why was the unmodifiable list modified?

Let us first have a look at the Java Doc of the Collections‘ unmodifiableList() method.

public static <T> List<T> unmodifiableList(List<? extends T> list)

Returns an unmodifiable view of the specified list. This method allows modules to provide users with “read-only” access to internal lists. Query operations on the returned list “read through” to the specified list, and attempts to modify the returned list, whether direct or via its iterator, result in an UnsupportedOperationException.

The returned list will be serializable if the specified list is serializable. Similarly, the returned list will implement RandomAccess if the specified list does.

Parameters:
list – the list for which an unmodifiable view is to be returned.

Returns:
an unmodifiable view of the specified list.

The documentation mentions that the returned object (the unmodifiable list, referred to as returned list in this paragraph) is a read-only view of the given one (referred to in this paragraph as the given list). It says nothing about the behaviour we just saw. Well now we know that while we cannot modify the returned list, any changes to the given list are observed by the returned one.
How to prevent modifications to the unmodifiable list?
The solution to this problem is quite simple and is highlighted in the following code.

final List<String> modifiable = new ArrayList<String>();
modifiable.add("Java");
modifiable.add("is");

// Here we are creating a new array list
final List<String> unmodifiable = Collections.unmodifiableList(new ArrayList<String>(modifiable));
System.out.println("Before modification: " + unmodifiable);

modifiable.add("the");
modifiable.add("best");

System.out.println("After modification: " + unmodifiable);
Note that in this example, we are not passing the modifiable list object, but a new list created from this one. When we create a list like this, the elements of the given list are simply copied to the one being created. Therefore, these two lists (the modifiable list and the one just created: new ArrayList(modifiable) list) are disconnected and they only share the elements. Any modifications to each list will not affect the other.

One has to be aware that if we modify any of the elements within either of the lists, then all variables referring to the modified object will observe the modification. This is not the case here, as String is immutable and cannot be changed once created. But if the objects contained by these two lists were mutable, then any change to an object in one list would be observed in the peer element in the other list (see the sketch after the output below).

The above code will produce the following result

Before modification: [Java, is]
After modification: [Java, is]
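
To illustrate the caveat about mutable elements, a minimal sketch using StringBuilder (any mutable element type behaves the same way):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SharedElementsExample {
    public static void main(String[] args) {
        List<StringBuilder> modifiable = new ArrayList<>();
        modifiable.add(new StringBuilder("Java"));

        // Defensive copy: the two lists themselves are independent...
        List<StringBuilder> unmodifiable =
                Collections.unmodifiableList(new ArrayList<>(modifiable));

        // ...but they still share the same element objects.
        modifiable.get(0).append(" is great");

        System.out.println(unmodifiable); // [Java is great]
    }
}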

Iterator vs ListIterator and Enumeration:
=========================================
// Assuming firstlist is a List<String> populated earlier
System.out.println("==============");
Iterator<String> iterator = firstlist.iterator();
while (iterator.hasNext()) {
    System.out.println(iterator.next());
}

// ListIterator can also traverse backwards, here starting from the end of the list
ListIterator<String> listIterator = firstlist.listIterator(firstlist.size());
while (listIterator.hasPrevious()) {
    System.out.println(listIterator.previous());
}

// Enumeration is the legacy read-only cursor
Enumeration<String> enumeration = Collections.enumeration(firstlist);
while (enumeration.hasMoreElements()) {
    System.out.println(enumeration.nextElement());
}

Show SQL queries of Hibernate in Spring Boot:
=============================================
#show sql statement
logging.level.org.hibernate.SQL=debug

#show sql values
logging.level.org.hibernate.type.descriptor.sql=trace

Uber vs fat jar:
==================
There is no difference whatsoever. These terms are all synonyms of each other.

The term “uber-jar” may be more commonly used in documentations (take the maven-shade-plugin documentation for example) but “fat-jar” is also widely used.

Über is the German word for above or over, as in a line from a previous national anthem: Deutschland, Deutschland, über alles (Germany, Germany above all else).

Hence, in this context, an uber-jar is an “over-jar”, one level up from a simple “jar”, defined as one that contains both your package and all its dependencies in one single JAR file. The name can be thought to come from the same stable as ultrageek, superman, hyperspace, and metadata, which all have similar meanings of “beyond the normal”.

The advantage is that you can distribute your uber-jar and not care at all whether or not dependencies are installed at the destination, as your uber-jar actually has no dependencies.

All the dependencies of your own stuff within the uber-jar are also within that uber-jar. As are all dependencies of those dependencies. And so on.
A fat jar is a jar which contains the classes from all the libraries on which your project depends and, of course, the classes of the current project.

You can create an uber jar by using the maven-shade-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <manifestEntries>
                            <Main-Class>com.howtodoinjava.demo.App</Main-Class>
                            <Build-Number>1.0</Build-Number>
                        </manifestEntries>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>

spring boot+rest :
==================

I have selected dependencies like Jersey, Spring Web, Spring HATEOAS, Spring JPA and Spring Security etc. You can add more dependencies after you have downloaded and imported the project or in future when requirements arise.
Convention over configuration:
——————————-
Spring Boot uses convention over configuration by scanning the dependent libraries available in the class path. For each spring-boot-starter-* dependency in the POM file, Spring Boot executes a default AutoConfiguration class. AutoConfiguration classes use the *AutoConfiguration lexical pattern, where * represents the library. For example, the autoconfiguration of spring security is done through SecurityAutoConfiguration.

At the same time, if you do not want a particular auto-configuration applied to your project, excluding it is very simple: just use exclude = SecurityAutoConfiguration.class as below.
@SpringBootApplication(exclude=SecurityAutoConfiguration.class)
-> excludes it.

@SpringBootApplication:
========================
@SpringBootApplication Annotation
SpringBootApplication is defined as below:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Inherited
@SpringBootConfiguration
@EnableAutoConfiguration
@ComponentScan(excludeFilters = @Filter(type = FilterType.CUSTOM, classes = TypeExcludeFilter.class))
public @interface SpringBootApplication
{
//more code
}

@ComponentScan
@SpringBootConfiguration
@EnableAutoConfiguration
@SpringBootConfiguration
@Configuration
public @interface SpringBootConfiguration
{
//more code
}
This annotation adds the @Configuration annotation to the class, marking it as a source of bean definitions for the application context.

@EnableAutoConfiguration
This tells Spring Boot to auto-configure important bean definitions based on the dependencies added in pom.xml, by adding beans based on classpath settings, other beans, and various property settings.

@ComponentScan
This annotation tells Spring Boot to scan the base package, find other beans/components and configure them as well.

Actuators:
==========
In this Spring Boot Actuator tutorial, learn about the built-in HTTP endpoints available in any Boot application for different
monitoring and management purposes. Before Spring Boot, if we had to introduce this type of monitoring functionality into our
applications, we had to develop all those components manually, and they were very specific to our needs. With Spring Boot
we have the Actuator module, which makes it very easy.

/env Returns list of properties in current environment
/health Returns application health information.
/auditevents Exposes audit events information for the current application.
/beans Returns a complete list of all the Spring beans in your application.
/trace Returns trace logs (by default the last 100 HTTP requests).
/dump It performs a thread dump.
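
Note that in Spring Boot 2 only the health and info endpoints are exposed over HTTP by default; the others have to be opted in, for example in application.properties (a sketch assuming Boot 2.x property names):

# Expose selected actuator endpoints over HTTP (Spring Boot 2.x)
management.endpoints.web.exposure.include=health,info,env,beans
# or expose everything:
# management.endpoints.web.exposure.include=*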

Actuator Security with WebSecurityConfigurerAdapter
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
public class SpringSecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication().withUser("admin").password("admin").roles("ADMIN");
    }
}
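
A sketch of the companion configure(HttpSecurity) override that actually locks the actuator endpoints down to that user; the class name and the /actuator/** base path (Spring Boot 2 style) are assumptions:

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
public class ActuatorSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        // {noop} tells Spring Security 5's DelegatingPasswordEncoder the password is plain text;
        // drop the prefix on older Spring Security versions
        auth.inMemoryAuthentication().withUser("admin").password("{noop}admin").roles("ADMIN");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                .antMatchers("/actuator/**").hasRole("ADMIN")
                .anyRequest().permitAll()
                .and()
                .httpBasic();
    }
}
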
CORS support
CORS support is disabled by default and is only enabled once the endpoints.cors.allowed-origins property has been set.

endpoints.cors.allowed-origins = http://example.com
endpoints.cors.allowed-methods = GET,POST

CommandLineRunner in Spring Boot:
=============================
Spring Boot's CommandLineRunner interface is used to run a code block only once in the application's lifetime, after the application is initialized.
@Component
class ApplicationStartupRunner implements CommandLineRunner {
    protected final Log logger = LogFactory.getLog(getClass());

    @Override
    public void run(String... args) throws Exception {
        logger.info("ApplicationStartupRunner run method Started !!");
    }
}

If you have multiple CommandLineRunner implementations, use @Order to control the order in which they run:

package com.example.springbootmanagementexample;

import org.springframework.boot.CommandLineRunner;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;

public class ApplicationStartupRunner implements CommandLineRunner {
    // protected final Log logger = LogFactory.getLog(getClass());
    @Override
    public void run(String... args) throws Exception {
        System.out.println("Application Started !!");
    }
}

@Component
@Order(value=2)
class ApplicationStartupRunner2 implements CommandLineRunner {
    // protected final Log logger = LogFactory.getLog(getClass());
    @Override
    public void run(String... args) throws Exception {
        System.out.println("Application Started 2!!");
    }
}

@Component
@Order(value=3)
class ApplicationStartupRunner3 implements CommandLineRunner {
    // protected final Log logger = LogFactory.getLog(getClass());
    @Override
    public void run(String... args) throws Exception {
        System.out.println("Application Started23 !!");
    }
}

change the embedded server from tomcat to jetty:
================================================

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jetty</artifactId>
</dependency>

server.port=8080
server.servlet.context-path=/home

####Jetty specific properties########

server.jetty.acceptors= # Number of acceptor threads to use.
server.jetty.max-http-post-size=0 # Maximum size in bytes of the HTTP post or put content.
server.jetty.selectors= # Number of selector threads to use.
Also, you may configure these options programmatically using the JettyEmbeddedServletContainerFactory bean.

@Bean
public JettyEmbeddedServletContainerFactory jettyEmbeddedServletContainerFactory() {
    JettyEmbeddedServletContainerFactory jettyContainer =
            new JettyEmbeddedServletContainerFactory();

    jettyContainer.setPort(9000);
    jettyContainer.setContextPath("/home");
    return jettyContainer;
}

port change:
=================
1) Change default server port from application.properties file
You can do lots of wonderful things by simply making few entries in application.properties file in any spring boot application. Changing server port is one of them.

### Default server port #########
server.port=9000
2) Implement EmbeddedServletContainerCustomizer interface
EmbeddedServletContainerCustomizer interface is used for customizing auto-configured embedded servlet containers. Any beans of this type will get a callback with the container factory before the container itself is started, so you can set the port, address, error pages etc.

import org.springframework.boot.context.embedded.ConfigurableEmbeddedServletContainer;
import org.springframework.boot.context.embedded.EmbeddedServletContainerCustomizer;
import org.springframework.stereotype.Component;

@Component
public class AppContainerCustomizer implements EmbeddedServletContainerCustomizer {

@Override
public void customize(ConfigurableEmbeddedServletContainer container) {

container.setPort(9000);

}
}
3) Change server port from command line
If your application is built as an uber jar, you may consider this option as well.

java -jar -Dserver.port=9000 spring-boot-demo.jar

change Application context path:
——————————–
@Component
class AppContainerCustomizer implements EmbeddedServletContainerCustomizer {

    @Override
    public void customize(ConfigurableEmbeddedServletContainer container) {

        container.setPort(9010);
        // you can also set the context path here, e.g. container.setContextPath("/home");
    }
}

http://localhost:PORT/

main app:
package com.example.springbootmanagementexample;

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory;
import org.springframework.boot.web.support.SpringBootServletInitializer;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class SpringBootManagementExampleApplication extends
SpringBootServletInitializer implements CommandLineRunner {

public static void main(String[] args) {
SpringApplication.run(SpringBootManagementExampleApplication.class, args);
}

@Override
public void run(String... arg0) throws Exception {
// TODO Auto-generated method stub
System.out.println("I ran..............");
}

@Bean
public ApplicationStartupRunner runner() {
return new ApplicationStartupRunner();
}

@Bean
public ApplicationStartupRunner2 runner2() {
return new ApplicationStartupRunner2();
}
@Bean
public ApplicationStartupRunner3 runner3() {
return new ApplicationStartupRunner3();
}

@Bean
public JettyEmbeddedServletContainerFactory getIt() {
JettyEmbeddedServletContainerFactory containerFactory=
new JettyEmbeddedServletContainerFactory();
containerFactory.setPort(9000);
containerFactory.setContextPath("/home");
return containerFactory;

}

@Bean
public AppContainerCustomizer getit() {
return new AppContainerCustomizer();

}
}

http => Https:
==============
keytool -genkey -alias selfsigned_localhost_sslserver -keyalg RSA -keysize 2048 -validity 700 -keypass changeit -storepass changeit -keystore ssl-server.jks

Spring boot SSL Configuration
First we need to copy the generated keystore file (ssl-server.jks) into the resources folder and then open the application.properties and add the below entries.

server.port=8443
server.ssl.key-alias=selfsigned_localhost_sslserver
server.ssl.key-password=changeit
server.ssl.key-store=classpath:ssl-server.jks
server.ssl.key-store-provider=SUN
server.ssl.key-store-type=JKS
That’s all we need to enable https. It’s pretty easy, right? Thanks to spring boot for making everything possible very easily.

Redirect HTTP requests to HTTPS
This is an optional step in case you want to redirect your HTTP traffic to HTTPS, so that the whole site becomes secured. To do that in Spring Boot, we need to add an HTTP connector on port 8080 and set its redirect port to 8443, so that any HTTP request on 8080 is automatically redirected to 8443 over HTTPS.

To do that you just need to add below configuration.

@Bean
public EmbeddedServletContainerFactory servletContainer() {
TomcatEmbeddedServletContainerFactory tomcat = new TomcatEmbeddedServletContainerFactory() {
@Override
protected void postProcessContext(Context context) {
SecurityConstraint securityConstraint = new SecurityConstraint();
securityConstraint.setUserConstraint("CONFIDENTIAL");
SecurityCollection collection = new SecurityCollection();
collection.addPattern("/*");
securityConstraint.addCollection(collection);
context.addConstraint(securityConstraint);
}
};

tomcat.addAdditionalTomcatConnectors(redirectConnector());
return tomcat;
}

private Connector redirectConnector() {
Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
connector.setScheme("http");
connector.setPort(8080);
connector.setSecure(false);
connector.setRedirectPort(8443);

return connector;
}

getAll beans:
===============
@Autowired
private ApplicationContext appContext;

@Override
public void run(String... args) throws Exception
{
String[] beans = appContext.getBeanDefinitionNames();
Arrays.sort(beans);
for (String bean : beans)
{
System.out.println(bean + " of Type :: " + appContext.getBean(bean).getClass());
}
}

PropertyEditor:
===============
Spring has a number of built-in PropertyEditors in the org.springframework.beans.propertyeditors package, e.g. for Boolean, Currency, and URL. Some of these editors are registered by default, while others you need to register when required.

You can also create a custom PropertyEditor in case the default property editors do not serve the purpose. Let's say we are creating an application for book management. People can search for books by ISBN, and you will need to display the ISBN details on a web page.

@RequestMapping(value = "/books/{isbn}", method = RequestMethod.GET)
public String getBook(@PathVariable Isbn isbn, Map<String, Object> model)
{
    LOGGER.info("You searched for book with ISBN :: " + isbn.getIsbn());
    model.put("isbn", isbn);
    return "index";
}

@InitBinder
public void initBinder(WebDataBinder binder) {
binder.registerCustomEditor(Isbn.class, new IsbnEditor());
}

Note: PropertyEditors are not thread-safe.
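
The Isbn and IsbnEditor types referenced above are not shown in the snippet; a minimal sketch of what they could look like (both classes are assumptions, and in a real project each would live in its own file):

import java.beans.PropertyEditorSupport;

// Hypothetical value object bound from the {isbn} path variable.
class Isbn {
    private final String isbn;

    Isbn(String isbn) {
        this.isbn = isbn;
    }

    public String getIsbn() {
        return isbn;
    }
}

// Hypothetical editor: converts the raw path segment text into an Isbn and back.
public class IsbnEditor extends PropertyEditorSupport {

    @Override
    public void setAsText(String text) {
        setValue(new Isbn(text == null ? null : text.trim()));
    }

    @Override
    public String getAsText() {
        Isbn value = (Isbn) getValue();
        return value == null ? "" : value.getIsbn();
    }
}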

Scheduling in spring boot:
==========================
Add @EnableScheduling annotation to your spring boot application class. @EnableScheduling is a Spring Context module annotation. It internally imports the SchedulingConfiguration via the @Import(SchedulingConfiguration.class) instruction

@SpringBootApplication
@EnableScheduling
public class SpringBootWebApplication {

}

Note: if the @Scheduled method takes arguments, startup fails with "Encountered invalid @Scheduled method 'run': Only no-arg methods may be annotated with @Scheduled".
(Application startup log: Spring MVC, Spring Data REST and actuator request mappings are registered, then:)
2018-03-11 14:59:07.087 INFO 6288 --- [ main] s.a.ScheduledAnnotationBeanPostProcessor : No TaskScheduler/ScheduledExecutorService bean found for scheduled processing
2018-03-11 14:59:07.128 INFO 6288 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2018-03-11 14:59:07.146 INFO 6288 --- [ main] com.me.pack.MetricsActuatorApplication : Started MetricsActuatorApplication in 6.104 seconds (JVM running for 6.918)
Sun Mar 11 14:59:07 IST 2018
Sun Mar 11 14:59:08 IST 2018
Sun Mar 11 14:59:08 IST 2018
Sun Mar 11 14:59:09 IST 2018
Sun Mar 11 14:59:09 IST 2018
Sun Mar 11 14:59:10 IST 2018
Sun Mar 11 14:59:10 IST 2018
Sun Mar 11 14:59:11 IST 2018
Sun Mar 11 14:59:11 IST 2018
Sun Mar 11 14:59:12 IST 2018
Sun Mar 11 14:59:12 IST 2018
Sun Mar 11 14:59:13 IST 2018
Sun Mar 11 14:59:13 IST 2018

package com.me.pack;

import java.util.Date;

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@SpringBootApplication
@EnableScheduling
public class MetricsActuatorApplication {

public static void main(String[] args) {
SpringApplication.run(MetricsActuatorApplication.class, args);
}

@Scheduled(initialDelay=500, fixedDelay=500)
public void run() throws Exception {
// TODO Auto-generated method stub
System.out.println(new Date());
}
}

Now you can add @Scheduled annotations on the methods you want to schedule. The only condition is that the methods must take no arguments.

ScheduledAnnotationBeanPostProcessor that will be created by the imported SchedulingConfiguration scans all declared beans for the presence of the @Scheduled annotations.

For every annotated method without arguments, the appropriate executor thread pool will be created. This thread pool will manage the scheduled invocation of the annotated method.

@Scheduled(initialDelay = 1000, fixedRate = 10000)
public void run() {
logger.info("Current time is :: " + Calendar.getInstance().getTime());
}
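
The startup log above shows "No TaskScheduler/ScheduledExecutorService bean found for scheduled processing", in which case Spring falls back to a single-threaded scheduler. A sketch of registering your own scheduler for @Scheduled methods (the pool size and thread-name prefix are arbitrary choices):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
public class SchedulerConfig {

    // Picked up by ScheduledAnnotationBeanPostProcessor for all @Scheduled methods
    @Bean
    public ThreadPoolTaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(5);
        scheduler.setThreadNamePrefix("scheduled-task-");
        return scheduler;
    }
}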

Add Jersey config related stuff:
==================================
package com.me.pack;

import org.glassfish.jersey.server.ResourceConfig;
import org.springframework.stereotype.Component;

@Component
public class JerseyConfig extends ResourceConfig{
public JerseyConfig() {
register(PlacesService.class);
}
}
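
The PlacesService class registered above is not shown; a minimal sketch of a matching JAX-RS resource (the path and the hard-coded payload are assumptions):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.springframework.stereotype.Component;

// Hypothetical Jersey resource exposed at /places
@Component
@Path("/places")
public class PlacesService {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String getPlaces() {
        return "[\"Hyderabad\", \"Bangalore\"]";
    }
}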

JMS with default ActiveMQ:
==========================
package com.me.pack;

import javax.jms.ConnectionFactory;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jms.DefaultJmsListenerContainerFactoryConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.config.JmsListenerContainerFactory;
import org.springframework.jms.support.converter.MappingJackson2MessageConverter;
import org.springframework.jms.support.converter.MessageConverter;
import org.springframework.jms.support.converter.MessageType;

@SpringBootApplication
@EnableJms
public class MsgApplication {

public static void main(String[] args) {
SpringApplication.run(MsgApplication.class, args);
}

@Bean
public JmsListenerContainerFactory myFactory(
ConnectionFactory connectionFactory,
DefaultJmsListenerContainerFactoryConfigurer configurer)
{
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
// This provides all boot’s default to this factory, including the message converter
configurer.configure(factory, connectionFactory);
// You could still override some of Boot’s default if necessary.
return factory;
}

@Bean
public MessageConverter jacksonJmsMessageConverter()
{
MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
converter.setTargetType(MessageType.TEXT);
converter.setTypeIdPropertyName("_type");
return converter;
}
}

package com.me.pack;

import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class MessageListener {

@JmsListener(destination="jms.message.endpoint")
public void recieveMessage(Message message) {
System.out.println("Message recieved: "+message);
}
}

package com.me.pack;

import java.util.Date;

import org.springframework.boot.SpringApplication;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.jms.core.JmsTemplate;

public class TestClass {
public static void main(String[] args) {
ConfigurableApplicationContext context=
SpringApplication.run(MsgApplication.class, args);
JmsTemplate jmsTemplate=context.getBean(JmsTemplate.class);
jmsTemplate.convertAndSend(
"jms.message.endpoint", new Message(1001, "test body", new Date()));
}
}

2018-03-11 16:14:56.093 INFO 14320 — [ main] com.me.pack.TestClass : Started TestClass in 3.582 seconds (JVM running for 4.634)
Message recieved: Message [id=1001, message=test body, date=Sun Mar 11 16:14:56 IST 2018]

@EnableJms
@JmsListener
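
The Message type used above is a plain POJO (not javax.jms.Message); a minimal sketch, with field names inferred from the constructor call and the printed output:

import java.io.Serializable;
import java.util.Date;

// Payload object; Jackson needs the no-arg constructor and getters/setters to (de)serialize it.
public class Message implements Serializable {

    private int id;
    private String message;
    private Date date;

    public Message() {
    }

    public Message(int id, String message, Date date) {
        this.id = id;
        this.message = message;
        this.date = date;
    }

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }
    public Date getDate() { return date; }
    public void setDate(Date date) { this.date = date; }

    @Override
    public String toString() {
        return "Message [id=" + id + ", message=" + message + ", date=" + date + "]";
    }
}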

JSP view:
=========
@Controller
public class IndexController {

    @RequestMapping("/")
    public String home(Map<String, Object> model) {
        model.put("message", "HowToDoInJava Reader !!");
        return "index";
    }

    @RequestMapping("/next")
    public String next(Map<String, Object> model) {
        model.put("message", "You are in new page !!");
        return "next";
    }
}

Add the view prefix and suffix in application.properties:
spring.mvc.view.prefix=/WEB-INF/view/
spring.mvc.view.suffix=.jsp

//For detailed logging during development

logging.level.org.springframework=TRACE
logging.level.com=TRACE

@Configuration
@EnableWebMvc
@ComponentScan
public class MvcConfiguration extends WebMvcConfigurerAdapter
{
@Override
public void configureViewResolvers(ViewResolverRegistry registry) {
InternalResourceViewResolver resolver = new InternalResourceViewResolver();
resolver.setPrefix("/WEB-INF/view/");
resolver.setSuffix(".jsp");
resolver.setViewClass(JstlView.class);
registry.viewResolver(resolver);
}
}

Similarly, the same override style applies to WebSecurityConfigurerAdapter's configure() method.

====================
Logging with YML:
—————————-

Color-coded logging output
If your terminal supports ANSI, color output will be used to aid readability. You can set spring.output.ansi.enabled value to either ALWAYS, NEVER or DETECT.

spring:
  output:
    ansi:
      enabled: DETECT
Color coding is configured using the %clr conversion word. In its simplest form the converter will color the output according to the log level.


Learn spring boot logging configuration via application.yml file in simple and easy to follow instructions. In the default structure of a Spring Boot web application, you can locate the application.yml file under the resources folder.

Read More: Spring Boot Logging with application.properties
Table of Contents

Understand default spring boot logging
Set logging level
Set logging pattern
Set logging output to file
Using active profiles to load environment specific logging configuration
Color-coded logging output

Understand default spring boot logging
To understand default Spring Boot logging, let's put logs in a Spring Boot hello world example. Note that there is no logging-related configuration in the application.yml file; if you see any such configuration in the downloaded application, please remove it.

private final Logger LOGGER = LoggerFactory.getLogger(this.getClass());

@RequestMapping("/")
public String home(Map<String, Object> model) {

    LOGGER.debug("This is a debug message");
    LOGGER.info("This is an info message");
    LOGGER.warn("This is a warn message");
    LOGGER.error("This is an error message");

    model.put("message", "HowToDoInJava Reader !!");
    return "index";
}
Start the application. Access application at browser and verify log messages in console.

2017-03-02 23:33:51.318 INFO 3060 — [nio-8080-exec-1] c.h.app.controller.IndexController : info log statement printed
2017-03-02 23:33:51.319 WARN 3060 — [nio-8080-exec-1] c.h.app.controller.IndexController : warn log statement printed
2017-03-02 23:33:51.319 ERROR 3060 — [nio-8080-exec-1] c.h.app.controller.IndexController : error log statement printed
Note that the default logging level is INFO, because the debug log message is not printed.
There is a fixed default log message pattern, which is configured in the base configuration files.

%clr{%d{yyyy-MM-dd HH:mm:ss.SSS}}{faint} %clr{${LOG_LEVEL_PATTERN}} %clr{${sys:PID}}{magenta} %clr{---}{faint} %clr{[%15.15t]}{faint} %clr{%-40.40c{1.}}{cyan} %clr{:}{faint} %m%n${sys:LOG_EXCEPTION_CONVERSION_WORD}
The above pattern prints the following log message parts, with the respective color coding applied:

Date and Time — Millisecond precision.
Log Level — ERROR, WARN, INFO, DEBUG or TRACE.
Process ID.
A --- separator to distinguish the start of actual log messages.
Thread name — Enclosed in square brackets (may be truncated for console output).
Logger name — This is usually the source class name (often abbreviated).
The log message

Set logging level
When a message is logged via a Logger it is logged with a certain log level. In the application.yml file, you can define log levels of Spring Boot loggers, application loggers, Hibernate loggers, Thymeleaf loggers, and more. To set the logging level for any logger, add keys starting with logging.level.

The logging level can be one of TRACE, DEBUG, INFO, WARN, ERROR, FATAL, OFF. The root logger can be configured using logging.level.root.

logging:
  level:
    root: ERROR
    org.springframework.web: ERROR
    com.howtodoinjava: DEBUG
    org.hibernate: ERROR
In above configuration, I upgraded log level for application classes to DEBUG (from default INFO). Now observe the logs:

2017-03-02 23:57:14.966 DEBUG 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : debug log statement printed
2017-03-02 23:57:14.967 INFO 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : info log statement printed
2017-03-02 23:57:14.967 WARN 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : warn log statement printed
2017-03-02 23:57:14.967 ERROR 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : error log statement printed

Set logging pattern
To change the logging patterns, use the logging.pattern.console and logging.pattern.file keys.

logging:
  pattern:
    console: "%d{yyyy-MM-dd HH:mm:ss} - %msg%n"
    file: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n"
After changing console logging pattern in application, log statements are printed as below:

2017-03-03 12:59:13 - This is a debug message
2017-03-03 12:59:13 - This is an info message
2017-03-03 12:59:13 - This is a warn message
2017-03-03 12:59:13 - This is an error message

Set logging output to file
To print the logs to a file, use the logging.file or logging.path key.

logging:
  file: /logs/application-debug.log
Verify the logs in file.

2017-03-03 13:02:50.608 DEBUG 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is a debug message
2017-03-03 13:02:50.608 INFO 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is an info message
2017-03-03 13:02:50.608 WARN 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is a warn message
2017-03-03 13:02:50.609 ERROR 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is an error message

Using active profiles to load environment specific logging configuration
It is desirable to have multiple configurations for an application, where each configuration is specific to a particular runtime environment. In Spring Boot, you can achieve this by creating multiple application-{profile}.yml files in the same location as the application.yml file.

Profile specific keys always override the non-profile specific ones. If several profiles are specified, a last wins strategy applies.

If I have two environments for my application, i.e. prod and dev, then I will create two profile-specific yml files.

application-dev.yml

logging:
  file: logs/application-debug.log
  pattern:
    console: "%d %-5level %logger : %msg%n"
    file: "%d %-5level [%thread] %logger : %msg%n"
  level:
    org.springframework.web: ERROR
    com.howtodoinjava: DEBUG
    org.hibernate: ERROR
application-prod.yml

logging:
  file: logs/application-debug.log
  pattern:
    console: "%d %-5level %logger : %msg%n"
    file: "%d %-5level [%thread] %logger : %msg%n"
  level:
    org.springframework.web: ERROR
    com.howtodoinjava: INFO
    org.hibernate: ERROR
To supply the profile information to the application, the key spring.profiles.active is passed at runtime.

$ java -jar -Dspring.profiles.active=prod spring-boot-demo.jar

The same walkthrough, using .properties files instead of YAML:
==============================================================
Understand default spring boot logging
Set logging level
Set logging pattern
Set logging output to file
Using active profiles to load environment specific logging configuration
Color-coded logging output

Understand default spring boot logging
To understand default Spring Boot logging, let's add log statements to the Spring Boot hello world example. Note that there is no logging-related configuration in the application.properties file; if you see any in the downloaded application, please remove it.

private final Logger LOGGER = LoggerFactory.getLogger(this.getClass());

@RequestMapping("/")
public String home(Map model) {

    LOGGER.debug("This is a debug message");
    LOGGER.info("This is an info message");
    LOGGER.warn("This is a warn message");
    LOGGER.error("This is an error message");

    model.put("message", "HowToDoInJava Reader !!");
    return "index";
}
Start the application, access it in a browser, and verify the log messages in the console.

2017-03-02 23:33:51.318 INFO 3060 — [nio-8080-exec-1] c.h.app.controller.IndexController : info log statement printed
2017-03-02 23:33:51.319 WARN 3060 — [nio-8080-exec-1] c.h.app.controller.IndexController : warn log statement printed
2017-03-02 23:33:51.319 ERROR 3060 — [nio-8080-exec-1] c.h.app.controller.IndexController : error log statement printed
Note that the default logging level is INFO; that is why the debug message is not printed.
There is a fixed default log message pattern, which is configured in the base configuration files.

%clr{%d{yyyy-MM-dd HH:mm:ss.SSS}}{faint} %clr{${LOG_LEVEL_PATTERN}} %clr{${sys:PID}}{magenta}
%clr{—}{faint} %clr{[%15.15t]}{faint} %clr{%-40.40c{1.}}{cyan} %clr{:}{faint} %m%n${sys:LOG_EXCEPTION_CONVERSION_WORD}
The above pattern prints the listed log message parts with the respective color coding applied:

Date and Time — Millisecond precision.
Log Level — ERROR, WARN, INFO, DEBUG or TRACE.
Process ID.
A — separator to distinguish the start of actual log messages.
Thread name — Enclosed in square brackets (may be truncated for console output).
Logger name — This is usually the source class name (often abbreviated).
The log message

Set logging level
When a message is logged via a Logger it is logged with a certain log level. In the application.properties file, you can define log levels of Spring Boot loggers, application loggers, Hibernate loggers, Thymeleaf loggers, and more. To set the logging level for any logger, add properties starting with logging.level.

The logging level can be one of TRACE, DEBUG, INFO, WARN, ERROR, FATAL or OFF. The root logger can be configured using logging.level.root.

#logging.level.root=WARN

logging.level.org.springframework.web=ERROR
logging.level.com.howtodoinjava=DEBUG
In the above configuration, I raised the log level for application classes to DEBUG (from the default INFO). Now observe the logs:

2017-03-02 23:57:14.966 DEBUG 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : debug log statement printed
2017-03-02 23:57:14.967 INFO 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : info log statement printed
2017-03-02 23:57:14.967 WARN 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : warn log statement printed
2017-03-02 23:57:14.967 ERROR 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : error log statement printed

Set logging pattern
To change the logging patterns, use the logging.pattern.console and logging.pattern.file properties.

# Logging pattern for the console
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss} – %msg%n

# Logging pattern for file
logging.pattern.file=%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} – %msg%n
After changing console logging pattern in application, log statements are printed as below:

2017-03-03 12:59:13 – This is a debug message
2017-03-03 12:59:13 – This is an info message
2017-03-03 12:59:13 – This is a warn message
2017-03-03 12:59:13 – This is an error message

Set logging output to file
To print the logs to a file, use the logging.file or logging.path property.

logging.file=c:/users/howtodoinjava/application-debug.log
Verify the logs in file.

2017-03-03 13:02:50.608 DEBUG 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is a debug message
2017-03-03 13:02:50.608 INFO 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is an info message
2017-03-03 13:02:50.608 WARN 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is a warn message
2017-03-03 13:02:50.609 ERROR 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is an error message

Using active profiles to load environment specific logging configuration
It is desirable to have multiple configurations for an application, where each configuration is specific to a particular runtime environment. In Spring Boot, you can achieve this by creating multiple application-{profile}.properties files in the same location as the application.properties file.

Profile specific properties always override the non-profile specific ones. If several profiles are specified, a last wins strategy applies.

If I have two environments for my application, i.e. prod and dev, then I will create two profile-specific properties files.

application-dev.properties

logging.level.com.howtodoinjava=DEBUG
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss} – %msg%n
application-prod.properties

logging.level.com.howtodoinjava=ERROR
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss} – %msg%n
To supply the profile information to the application, the property spring.profiles.active is passed at runtime.

$ java -jar -Dspring.profiles.active=prod spring-boot-demo.jar

Color-coded logging output
If your terminal supports ANSI, color output will be used to aid readability. You can set spring.output.ansi.enabled value to either ALWAYS, NEVER or DETECT.

Color coding is configured using the %clr conversion word. In its simplest form the converter will color the output according to the log level.

FATAL and ERROR – Red
WARN – Yellow
INFO, DEBUG and TRACE – Green


Happy Learning !!


Security in webservice by ContainerRequestFilter:
==================================================
Learn to create JAX-RS 2.0 REST APIs using Spring Boot and the Jersey framework, and add role-based security using the JAX-RS annotations @PermitAll, @RolesAllowed or @DenyAll.

At the web method level we place @PermitAll / @RolesAllowed / @DenyAll; see the sketch below. The SecurityFilter that follows then enforces these annotations.
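
For context, a hypothetical resource class showing where these annotations sit on the web methods (paths and role names are made up for illustration):

import javax.annotation.security.DenyAll;
import javax.annotation.security.PermitAll;
import javax.annotation.security.RolesAllowed;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

// Hypothetical resource: the JAX-RS security annotations go on the web methods.
@Path("/users")
public class UserResource {

    @GET
    @Path("/public")
    @PermitAll                 // anyone may call this method
    public String publicInfo() {
        return "open to all";
    }

    @GET
    @Path("/admin")
    @RolesAllowed("ADMIN")     // only users in the ADMIN role
    public String adminInfo() {
        return "admins only";
    }

    @GET
    @Path("/blocked")
    @DenyAll                   // nobody may call this method
    public String blocked() {
        return "never reached";
    }
}
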
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Base64;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.StringTokenizer;

import javax.annotation.security.DenyAll;
import javax.annotation.security.PermitAll;
import javax.annotation.security.RolesAllowed;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ResourceInfo;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

/**
* This filter verify the access permissions for a user based on
* user name and password provided in request
* */
@Provider
public class SecurityFilter implements javax.ws.rs.container.ContainerRequestFilter
{
private static final String AUTHORIZATION_PROPERTY = "Authorization";
private static final String AUTHENTICATION_SCHEME = "Basic";
private static final Response ACCESS_DENIED = Response.status(Response.Status.UNAUTHORIZED).build();
private static final Response ACCESS_FORBIDDEN = Response.status(Response.Status.FORBIDDEN).build();
private static final Response SERVER_ERROR = Response.status(Response.Status.INTERNAL_SERVER_ERROR).build();

@Context
private ResourceInfo resourceInfo;

@Override
public void filter(ContainerRequestContext requestContext)
{
Method method = resourceInfo.getResourceMethod();
//Access allowed for all
if( ! method.isAnnotationPresent(PermitAll.class))
{
//Access denied for all
if(method.isAnnotationPresent(DenyAll.class))
{
requestContext.abortWith(ACCESS_FORBIDDEN);
return;
}

//Get request headers
final MultivaluedMap<String, String> headers = requestContext.getHeaders();

//Fetch authorization header
final List<String> authorization = headers.get(AUTHORIZATION_PROPERTY);

//If no authorization information present; block access
if(authorization == null || authorization.isEmpty())
{
requestContext.abortWith(ACCESS_DENIED);
return;
}

//Get encoded username and password
final String encodedUserPassword = authorization.get(0).replaceFirst(AUTHENTICATION_SCHEME + " ", "");

//Decode username and password
String usernameAndPassword = null;
try {
usernameAndPassword = new String(Base64.getDecoder().decode(encodedUserPassword));
} catch (Exception e) {
requestContext.abortWith(SERVER_ERROR);
return;
}

//Split username and password tokens
final StringTokenizer tokenizer = new StringTokenizer(usernameAndPassword, ":");
final String username = tokenizer.nextToken();
final String password = tokenizer.nextToken();

//Verifying Username and password
if(!(username.equalsIgnoreCase("admin") && password.equalsIgnoreCase("password"))){
requestContext.abortWith(ACCESS_DENIED);
return;
}

//Verify user access
if(method.isAnnotationPresent(RolesAllowed.class))
{
RolesAllowed rolesAnnotation = method.getAnnotation(RolesAllowed.class);
Set<String> rolesSet = new HashSet<String>(Arrays.asList(rolesAnnotation.value()));

//Is user valid?
if( ! isUserAllowed(username, password, rolesSet))
{
requestContext.abortWith(ACCESS_DENIED);
return;
}
}
}
}
private boolean isUserAllowed(final String username, final String password, final Set<String> rolesSet)
{
boolean isAllowed = false;

//Step 1. Fetch password from database and match with password in argument
//If both match then get the defined role for user from database and continue; else return isAllowed [false]
//Access the database and do this part yourself
//String userRole = userMgr.getUserRole(username);
String userRole = "ADMIN";

//Step 2. Verify user role
if(rolesSet.contains(userRole))
{
isAllowed = true;
}
return isAllowed;
}
}

ContainerRequestFilter

Message-driven beans can implement any messaging type. Most commonly, they implement the Java Message Service (JMS) technology.
public class TransactionTemplate
extends DefaultTransactionDefinition
implements TransactionOperations, InitializingBean
Template class that simplifies programmatic transaction demarcation and transaction exception handling.
The central method is execute(org.springframework.transaction.support.TransactionCallback), supporting transactional code that implements the TransactionCallback interface. This template handles the transaction lifecycle and possible exceptions such that neither the TransactionCallback implementation nor the calling code needs to explicitly handle transactions.
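
A minimal sketch of programmatic transaction demarcation with TransactionTemplate, assuming a PlatformTransactionManager is configured elsewhere (the class and method names here are illustrative):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallback;
import org.springframework.transaction.support.TransactionTemplate;

// Illustrative service: execute(...) opens a transaction, runs the callback,
// commits on normal return and rolls back if the callback throws a RuntimeException.
public class TransferService {

    private final TransactionTemplate transactionTemplate;

    public TransferService(PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
    }

    public boolean transfer() {
        return transactionTemplate.execute(new TransactionCallback<Boolean>() {
            @Override
            public Boolean doInTransaction(TransactionStatus status) {
                // transactional work (e.g. JDBC/JPA operations) goes here
                return Boolean.TRUE;
            }
        });
    }
}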

AtomicInteger / AtomicLong can be used:

As an atomic counter (incrementAndGet(), etc.) that can be used by many threads concurrently.

As a primitive that supports the compare-and-swap instruction (compareAndSet()) to implement non-blocking algorithms.
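
A small sketch of both uses (class name and values are illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {

    private static final AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) {
        // 1) as an atomic counter shared by many threads
        int next = counter.incrementAndGet();   // atomically adds 1 and returns the new value
        System.out.println(next);               // 1

        // 2) as a compare-and-swap primitive for non-blocking algorithms:
        //    retry until the CAS succeeds instead of taking a lock
        int current;
        do {
            current = counter.get();
        } while (!counter.compareAndSet(current, current * 2));
        System.out.println(counter.get());      // 2
    }
}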

for making a thread non-daemon and keeping it alive until shutdown:

import java.util.concurrent.CountDownLatch;

public class KeepThreadAliveExample {

    private final CountDownLatch keepAliveLatch = new CountDownLatch(1);
    private final Thread keepAliveThread;

    public KeepThreadAliveExample() {
        keepAliveThread = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println(Thread.currentThread().getName() + " waiting...");
                    keepAliveLatch.await();
                } catch (InterruptedException e) {
                    // interrupted: simply let the thread exit
                }
            }
        }, "KeepThreadAliveThread");
        keepAliveThread.setDaemon(false);
        keepAliveThread.start();
        // keep this thread alive (non daemon thread) until we shutdown
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                System.out.println("Thread shutdownhook called");
                keepAliveLatch.countDown();
            }
        });
    }
}
private Link getHateousLink(Employee employee) {
Link link = ControllerLinkBuilder.linkTo(EmployeeController.class).slash("findBy/" + employee.getName())
.withSelfRel();
return link;
}

@SpringBootApplication is a convenience annotation that adds all of the following:

@Configuration tags the class as a source of bean definitions for the application context.

@EnableAutoConfiguration tells Spring Boot to start adding beans based on classpath settings, other beans, and various property settings.

Normally you would add @EnableWebMvc for a Spring MVC app, but Spring Boot adds it automatically when it sees spring-webmvc on the classpath. This flags the application as a web application and activates key behaviors such as setting up a DispatcherServlet.

@ComponentScan tells Spring to look for other components, configurations, and services in the hello package, allowing it to find the controllers.
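
A minimal Spring Boot entry point showing the annotation in use (class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// The single @SpringBootApplication annotation replaces
// @Configuration + @EnableAutoConfiguration + @ComponentScan.
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}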

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

// The HTML markup inside the original string was stripped when copying; any markup whose
// visible text is "An example link." illustrates the point, e.g.:
String html = "<p>An <a href='http://example.com/'><b>example</b></a> link.</p>";
Document doc = Jsoup.parse(html);
String text = doc.body().text();   // "An example link."

https://dzone.com/articles/stack-vs-heap-understanding-java-memory-allocation

A third generation closely related to the tenured generation is the permanent generation. The permanent generation is special because it holds data needed by the virtual machine to describe objects that do not have an equivalence at the Java language level. For example objects describing classes and methods are stored in the permanent generation.
You could consider the “Method Area” a subset of “PermGen”, as the permanent generation space does hold class definitions, but it also holds interned Strings and other bits of data unlikely to ever be discarded.

2.5.6. Native Method Stacks
An implementation of the Java Virtual Machine may use conventional stacks, colloquially called “C stacks,” to support native methods (methods written in a language other than the Java programming language). Native method stacks may also be used by the implementation of an interpreter for the Java Virtual Machine’s instruction set in a language such as C. Java Virtual Machine implementations that cannot load native methods and that do not themselves rely on conventional stacks need not supply native method stacks. If supplied, native method stacks are typically allocated per thread when each thread is created.

This specification permits native method stacks either to be of a fixed size or to dynamically expand and contract as required by the computation. If the native method stacks are of a fixed size, the size of each native method stack may be chosen independently when that stack is created.

A Java Virtual Machine implementation may provide the programmer or the user control over the initial size of the native method stacks, as well as, in the case of varying-size native method stacks, control over the maximum and minimum method stack sizes.

The following exceptional conditions are associated with native method stacks:

If the computation in a thread requires a larger native method stack than is permitted, the Java Virtual Machine throws a StackOverflowError.

If native method stacks can be dynamically expanded and native method stack expansion is attempted but insufficient memory can be made available, or if insufficient memory can be made available to create the initial native method stack for a new thread, the Java Virtual Machine throws an OutOfMemoryError.

2.5.5. Run-Time Constant Pool
A run-time constant pool is a per-class or per-interface run-time representation of the constant_pool table in a class file (§4.4). It contains several kinds of constants, ranging from numeric literals known at compile-time to method and field references that must be resolved at run-time. The run-time constant pool serves a function similar to that of a symbol table for a conventional programming language, although it contains a wider range of data than a typical symbol table.

Each run-time constant pool is allocated from the Java Virtual Machine’s method area (§2.5.4). The run-time constant pool for a class or interface is constructed when the class or interface is created (§5.3) by the Java Virtual Machine.

The following exceptional condition is associated with the construction of the run-time constant pool for a class or interface:

When creating a class or interface, if the construction of the run-time constant pool requires more memory than can be made available in the method area of the Java Virtual Machine, the Java Virtual Machine throws an OutOfMemoryError.
Constant Type Value
CONSTANT_Class 7
CONSTANT_Fieldref 9
CONSTANT_Methodref 10
CONSTANT_InterfaceMethodref 11
CONSTANT_String 8
CONSTANT_Integer 3
CONSTANT_Float 4
CONSTANT_Long 5
CONSTANT_Double 6
CONSTANT_NameAndType 12
CONSTANT_Utf8 1
CONSTANT_MethodHandle 15
CONSTANT_MethodType 16
CONSTANT_InvokeDynamic 18

The Java Virtual Machine has a method area that is shared among all Java Virtual Machine threads. The method area is analogous to the storage area for compiled code of a conventional language or analogous to the “text” segment in an operating system process. It stores per-class structures such as the run-time constant pool, field and method data, and the code for methods and constructors, including the special methods (§2.9) used in class and instance initialization and interface initialization.

The method area is created on virtual machine start-up. Although the method area is logically part of the heap, simple implementations may choose not to either garbage collect or compact it. This version of the Java Virtual Machine specification does not mandate the location of the method area or the policies used to manage compiled code. The method area may be of a fixed size or may be expanded as required by the computation and may be contracted if a larger method area becomes unnecessary. The memory for the method area does not need to be contiguous.

A Java Virtual Machine implementation may provide the programmer or the user control over the initial size of the method area, as well as, in the case of a varying-size method area, control over the maximum and minimum method area size.

The following exceptional condition is associated with the method area:

If memory in the method area cannot be made available to satisfy an allocation request, the Java Virtual Machine throws an OutOfMemoryError.

2.5.3. Heap
The Java Virtual Machine has a heap that is shared among all Java Virtual Machine threads. The heap is the run-time data area from which memory for all class instances and arrays is allocated.

The heap is created on virtual machine start-up. Heap storage for objects is reclaimed by an automatic storage management system (known as a garbage collector); objects are never explicitly deallocated. The Java Virtual Machine assumes no particular type of automatic storage management system, and the storage management technique may be chosen according to the implementor’s system requirements. The heap may be of a fixed size or may be expanded as required by the computation and may be contracted if a larger heap becomes unnecessary. The memory for the heap does not need to be contiguous.

A Java Virtual Machine implementation may provide the programmer or the user control over the initial size of the heap, as well as, if the heap can be dynamically expanded or contracted, control over the maximum and minimum heap size.

The following exceptional condition is associated with the heap:

If a computation requires more heap than can be made available by the automatic storage management system, the Java Virtual Machine throws an OutOfMemoryError.

2.5.1. The pc Register
The Java Virtual Machine can support many threads of execution at once (JLS §17). Each Java Virtual Machine thread has its own pc (program counter) register. At any point, each Java Virtual Machine thread is executing the code of a single method, namely the current method (§2.6) for that thread. If that method is not native, the pc register contains the address of the Java Virtual Machine instruction currently being executed. If the method currently being executed by the thread is native, the value of the Java Virtual Machine’s pc register is undefined. The Java Virtual Machine’s pc register is wide enough to hold a returnAddress or a native pointer on the specific platform.

Each Java Virtual Machine thread has a private Java Virtual Machine stack, created at the same time as the thread. A Java Virtual Machine stack stores frames (§2.6). A Java Virtual Machine stack is analogous to the stack of a conventional language such as C: it holds local variables and partial results, and plays a part in method invocation and return. Because the Java Virtual Machine stack is never manipulated directly except to push and pop frames, frames may be heap allocated. The memory for a Java Virtual Machine stack does not need to be contiguous.

In The Java Virtual Machine Specification, First Edition, the Java Virtual Machine stack was known as the Java stack.

This specification permits Java Virtual Machine stacks either to be of a fixed size or to dynamically expand and contract as required by the computation. If the Java Virtual Machine stacks are of a fixed size, the size of each Java Virtual Machine stack may be chosen independently when that stack is created.

A Java Virtual Machine implementation may provide the programmer or the user control over the initial size of Java Virtual Machine stacks, as well as, in the case of dynamically expanding or contracting Java Virtual Machine stacks, control over the maximum and minimum sizes.

The following exceptional conditions are associated with Java Virtual Machine stacks:

If the computation in a thread requires a larger Java Virtual Machine stack than is permitted, the Java Virtual Machine throws a StackOverflowError.

If Java Virtual Machine stacks can be dynamically expanded, and expansion is attempted but insufficient memory can be made available to effect the expansion, or if insufficient memory can be made available to create the initial Java Virtual Machine stack for a new thread, the Java Virtual Machine throws an OutOfMemoryError.

Mark and Sweep Model of Garbage collection
JVM uses the mark and sweep garbage collection model for performing garbage collection of the whole heap. A mark and sweep garbage collection consists of two phases, the mark phase and the sweep phase.

During the mark phase, all the objects that are reachable from Java threads, native handlers and other root sources are marked as alive, as well as the objects that are reachable from these objects and so forth. This process identifies and marks all objects that are still used, and the rest can be considered garbage.

During the sweep phase, the heap is traversed to find the gaps between the live objects. These gaps are recorded in a free list and are made available for new object allocation.

Code Cache

When a Java program is run, it executes the code in a tiered manner. In the first tier, it uses client compiler (C1 compiler) in order to compile the code with instrumentation. The profiling data is used in the second tier (C2 compiler) for the server compiler, to compile that code in an optimized manner. Tiered compilation is not enabled by default in Java 7, but is enabled in Java 8.

The Just-In-Time (JIT) compiler stores the compiled code in an area called code cache. It is a special heap that holds the compiled code. This area is flushed if its size exceeds a threshold and these objects are not relocated by the GC.

Some of the performance issues, and the problem of the compiler not getting re-enabled, have been addressed in Java 8; one way to avoid these issues in Java 7 is to increase the size of the code cache to a point that is never reached.
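
For illustration, the code cache size can be raised with the standard HotSpot flag -XX:ReservedCodeCacheSize (the 256m value below is only an example):

$ java -XX:ReservedCodeCacheSize=256m -jar application.jar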

compaction:
GC compaction deals with the heap becoming fragmented into chunks, much like hard-disk fragmentation.

The moment your object is getting de-referenced (eligible for garbage collection) there is scope for compaction to occur. This is because you start fragmenting your heap, much like your hard drive getting fragmented.

So isn’t it reasonable to assume that with a constant memory footprint a Java application can run without any GC-introduced latencies, in other words, with no GC pauses?


https://www.concretepage.com/spring/spring-bean-life-cycle-tutorial -cOPIED

Spring Bean Life Cycle Tutorial
By Arvind Rai, June 17, 2016
This page will walk through the spring bean life cycle. In the spring bean life cycle, initialization and destruction callbacks are involved, and different spring bean aware interfaces are also called during the life cycle. Once dependency injection is completed, the initialization callback methods execute. Their purpose is to check the values that have been set in bean properties, perform any custom initialization, or provide a wrapper on the original bean. Once the initialization callbacks are completed, the bean is ready to be used. When the IoC container is about to remove the bean, the destruction callback methods execute. Their purpose is to release the resources held by the bean or to perform any other finalization tasks. When more than one initialization or destruction callback method has been implemented by a bean, those methods execute in a certain order. Here we will discuss the spring bean life cycle step by step, including the order of execution of the initialization and destruction callbacks as well as the spring bean aware interfaces.
Contents
Spring Bean Life Cycle Diagram and Steps
Initialization Callbacks
Destruction Callbacks
Spring Bean Life Cycle Order
BeanNameAware
BeanFactoryAware
BeanPostProcessor
@PostConstruct and @PreDestroy
InitializingBean and DisposableBean
init-method and destroy-method in XML and initMethod and destroyMethod in JavaConfig
Download Source Code

Spring Bean Life Cycle Diagram and Steps
The spring bean life cycle diagram (not reproduced here) illustrates the steps involved in the spring bean life cycle.
A bean life cycle includes the following steps.
1. Within IoC container, a spring bean is created using class constructor.
2. Now the dependency injection is performed using setter method.
3. Once the dependency injection is completed, BeanNameAware.setBeanName() is called. It sets the name of bean in the bean factory that created this bean.
4. Now BeanClassLoaderAware.setBeanClassLoader() is called that supplies the bean class loader to a bean instance.
5. Now BeanFactoryAware.setBeanFactory() is called that provides the owning factory to a bean instance.
6. Now the IoC container calls BeanPostProcessor.postProcessBeforeInitialization on the bean. Using this method a wrapper can be applied on original bean.
7. Now the method annotated with @PostConstruct is called.
8. After @PostConstruct, the method InitializingBean.afterPropertiesSet() is called.
9. Now the method specified by init-method attribute of bean in XML configuration is called.
10. And then BeanPostProcessor.postProcessAfterInitialization() is called. It can also be used to apply wrapper on original bean.
11. Now the bean instance is ready to be used. Perform the task using the bean.
12. Now when the ApplicationContext shuts down such as by using registerShutdownHook() then the method annotated with @PreDestroy is called.
13. After that DisposableBean.destroy() method is called on the bean.
14. Now the method specified by destroy-method attribute of bean in XML configuration is called.
15. Before garbage collection, finalize() method of Object is called.
Initialization Callbacks
In the bean life cycle, initialization callbacks are those methods which are called just after the properties of the bean have been set by the IoC container. The spring InitializingBean interface has a method afterPropertiesSet() which performs initialization work after the bean properties have been set. Using InitializingBean is not recommended by spring because it couples the code to spring. We should use @PostConstruct or the method specified by the bean attribute init-method in XML, which is the same as the initMethod attribute of the @Bean annotation in JavaConfig. If all three are used together, they will be called in the order below in the bean life cycle.
1. First @PostConstruct will be called.
2. Then InitializingBean.afterPropertiesSet() is called
3. And then method specified by bean init-method in XML or initMethod of @Bean in JavaConfig.

Destruction Callbacks
In the bean life cycle, when a bean is removed from the IoC container, the destruction callbacks are called. To get a destruction callback, the bean can implement the spring DisposableBean interface, and its method destroy() will be called. Spring recommends not to use DisposableBean because it couples the code to spring. As a destruction callback we should use the @PreDestroy annotation or the bean attribute destroy-method in XML configuration, which is the same as the destroyMethod attribute of @Bean in JavaConfig. If we use all these callbacks together, they will execute in the following order in the bean life cycle.
1. First @PreDestroy will be called.
2. After that DisposableBean.destroy() will be called.
3. And then method specified by bean destroy-method in XML configuration is called.
Spring Bean Life Cycle Order
Here we will provide a demo in which we use all the initialization and destruction callbacks and the bean aware interfaces to check their order of execution in the bean life cycle. We will use XML configuration.
build.gradle
apply plugin: 'java'
apply plugin: 'eclipse'
archivesBaseName = 'SpringDemo'
version = '1'
repositories {
mavenCentral()
}
dependencies {
compile 'org.springframework.boot:spring-boot-starter:1.3.3.RELEASE'
compile 'javax.inject:javax.inject:1'
}

Book.java
package com.concretepage;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanClassLoaderAware;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.BeanNameAware;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
public class Book implements InitializingBean, DisposableBean, BeanFactoryAware, BeanNameAware, BeanClassLoaderAware {
private String bookName;
private Book() {
System.out.println("---inside constructor---");
}
@Override
public void setBeanClassLoader(ClassLoader classLoader) {
System.out.println("---BeanClassLoaderAware.setBeanClassLoader---");
}
@Override
public void setBeanName(String name) {
System.out.println("---BeanNameAware.setBeanName---");
}
public void myPostConstruct() {
System.out.println("---init-method---");
}
@PostConstruct
public void springPostConstruct() {
System.out.println("---@PostConstruct---");
}
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
System.out.println("---BeanFactoryAware.setBeanFactory---");
}
@Override
public void afterPropertiesSet() throws Exception {
System.out.println("---InitializingBean.afterPropertiesSet---");
}
public String getBookName() {
return bookName;
}
public void setBookName(String bookName) {
this.bookName = bookName;
System.out.println("setBookName: Book name has set.");
}
public void myPreDestroy() {
System.out.println("---destroy-method---");
}
@PreDestroy
public void springPreDestroy() {
System.out.println("---@PreDestroy---");
}
@Override
public void destroy() throws Exception {
System.out.println("---DisposableBean.destroy---");
}
@Override
protected void finalize() {
System.out.println("---inside finalize---");
}
}

MyBeanPostProcessor.java
package com.concretepage;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
public class MyBeanPostProcessor implements BeanPostProcessor {
@Override
public Object postProcessAfterInitialization(Object bean, String beanName)
throws BeansException {
System.out.println("BeanPostProcessor.postProcessAfterInitialization");
return bean;
}
@Override
public Object postProcessBeforeInitialization(Object bean, String beanName)
throws BeansException {
System.out.println("BeanPostProcessor.postProcessBeforeInitialization");
return bean;
}
}

spring-config.xml
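
(The spring-config.xml content was stripped when this article was copied, because the XML tags were dropped by the browser. A plausible reconstruction, consistent with the code above and the output below, is:)

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- enables @PostConstruct / @PreDestroy processing -->
    <context:annotation-config/>

    <bean id="book" class="com.concretepage.Book"
          init-method="myPostConstruct" destroy-method="myPreDestroy">
        <property name="bookName" value="Mahabharat"/>
    </bean>

    <bean class="com.concretepage.MyBeanPostProcessor"/>
</beans>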

SpringDemo.java
package com.concretepage;
import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
public class SpringDemo {
public static void main(String[] args) {
AbstractApplicationContext context = new ClassPathXmlApplicationContext("spring-config.xml");
Book book = (Book)context.getBean("book");
System.out.println("Book Name:"+ book.getBookName());
context.registerShutdownHook();
}
}

Output
---inside constructor---
setBookName: Book name has set.
---BeanNameAware.setBeanName---
---BeanClassLoaderAware.setBeanClassLoader---
---BeanFactoryAware.setBeanFactory---
BeanPostProcessor.postProcessBeforeInitialization
---@PostConstruct---
---InitializingBean.afterPropertiesSet---
---init-method---
BeanPostProcessor.postProcessAfterInitialization
Book Name:Mahabharat
---@PreDestroy---
---DisposableBean.destroy---
---destroy-method---

BeanNameAware
In the bean life cycle, the org.springframework.beans.factory.BeanNameAware interface makes a bean aware of its name in the bean factory. The bean needs to implement this interface and its setBeanName() method. BeanNameAware.setBeanName() is called just after the dependency injection is completed. Find the sample example below.

Book.java
package com.concretepage;
import org.springframework.beans.factory.BeanNameAware;
public class Book implements BeanNameAware {
private String bookName;
@Override
public void setBeanName(String name) {
System.out.println("Bean Name:" + name);
}
public String getBookName() {
return bookName;
}
public void setBookName(String bookName) {
this.bookName = bookName;
}
}

AppConfig.java
package com.concretepage;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class AppConfig {
@Bean(name = "myBook")
public Book getBean() {
Book book = new Book();
book.setBookName("Mahabharat");
return book;
}
}

SpringDemo.java
package com.concretepage;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
public class SpringDemo {
public static void main(String[] args) {
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
ctx.register(AppConfig.class);
ctx.refresh();
Book book = ctx.getBean(Book.class);
System.out.println("Book Name:"+ book.getBookName());
ctx.close();
}
}

Output
Bean Name:myBook
Book Name:Mahabharat
BeanFactoryAware
In the bean life cycle, the org.springframework.beans.factory.BeanFactoryAware interface is implemented by a bean when it wants to be aware of its owning BeanFactory. We need to override the setBeanFactory() method, which is called just after the dependency injection is completed. Using this method we can change bean property values. Find the example.
Book.java
package com.concretepage;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
public class Book implements BeanFactoryAware {
private String bookName;
public String getBookName() {
return bookName;
}
public void setBookName(String bookName) {
this.bookName = bookName;
}
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
Book b = beanFactory.getBean(Book.class);
b.setBookName(getBookName()+"-Updated");
}
}

Output
Book Name:Mahabharat-Updated

BeanPostProcessor
The org.springframework.beans.factory.config.BeanPostProcessor interface is used for custom modification of newly created beans. To use BeanPostProcessor we create a class and override its two methods, postProcessBeforeInitialization() and postProcessAfterInitialization(). In the bean life cycle, BeanPostProcessor is called before and after the initialization callbacks such as InitializingBean.afterPropertiesSet(), @PostConstruct and init-method. In our example we use InitializingBean.afterPropertiesSet(), which is called between the postProcessBeforeInitialization() and postProcessAfterInitialization() methods of BeanPostProcessor.
MyBeanPostProcessor.java
package com.concretepage;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.stereotype.Component;
@Component
public class MyBeanPostProcessor implements BeanPostProcessor {
@Override
public Object postProcessBeforeInitialization(Object bean, String beanName)
throws BeansException {
System.out.println("postProcessBeforeInitialization: Bean Name- " + beanName);
if (bean instanceof Book) {
Book b = (Book)bean;
b.setBookName(b.getBookName()+"-Before");
}
return bean;
}
@Override
public Object postProcessAfterInitialization(Object bean, String beanName)
throws BeansException {
System.out.println("postProcessAfterInitialization: Bean Name- " + beanName);
if (bean instanceof Book) {
Book b = (Book)bean;
b.setBookName(b.getBookName()+"-After");
}
return bean;
}
}

Book.java
package com.concretepage;
import org.springframework.beans.factory.InitializingBean;
public class Book implements InitializingBean {
private String bookName;
public String getBookName() {
return bookName;
}
public void setBookName(String bookName) {
this.bookName = bookName;
System.out.println("---Inside setBookName---");
}
@Override
public void afterPropertiesSet() throws Exception {
System.out.println("---afterPropertiesSet---");
bookName = bookName + "-Hello";
}
}

AppConfig.java
package com.concretepage;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
@Configuration
@ComponentScan(basePackages="com.concretepage")
public class AppConfig {
@Bean(name = "myBook")
public Book getBean() {
Book book = new Book();
book.setBookName("Mahabharat");
return book;
}
}

Output
---Inside setBookName---
postProcessBeforeInitialization: Bean Name- myBook
---Inside setBookName---
---afterPropertiesSet---
postProcessAfterInitialization: Bean Name- myBook
---Inside setBookName---
Book Name:Mahabharat-Before-Hello-After
@PostConstruct and @PreDestroy
Here we will discuss the role of the JSR-250 @PostConstruct and @PreDestroy annotations in the spring bean life cycle. Spring recommends using these annotations as initialization and destruction callbacks. A @PostConstruct annotated method executes just after dependency injection is completed, to perform any initialization; it is customary to name it init(). A @PreDestroy annotated method executes before the bean is removed from the spring container; it is commonly used to release resources held by the bean, and it is customary to name it destroy(). This method is called when the close() method of the spring context is called. Find the sample example.
Book.java
package com.concretepage;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
public class Book {
private String bookName;
@PostConstruct
public void init() {
System.out.println("inside init()");
}
public String getBookName() {
return bookName;
}
public void setBookName(String bookName) {
this.bookName = bookName;
System.out.println("---Inside setBookName---");
}
@PreDestroy
public void destroy() {
System.out.println("inside destroy()");
}
}

Output
---Inside setBookName---
inside init()
Book Name:Mahabharat
inside destroy()
InitializingBean and DisposableBean
In the bean life cycle, org.springframework.beans.factory.InitializingBean is an initialization callback whose method afterPropertiesSet() executes once the dependency injection is completed. This method is used to check whether all properties have been initialized or to perform any custom initialization. org.springframework.beans.factory.DisposableBean is a destruction callback whose method destroy() executes before the bean is removed from the spring container. This method is used to release resources and is called when the close() method of the spring context is called.
Book.java
package com.concretepage;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
public class Book implements InitializingBean, DisposableBean {
private String bookName;
@Override
public void afterPropertiesSet() throws Exception {
System.out.println("Inside afterPropertiesSet()");
bookName+= "-Updated";
}
public String getBookName() {
return bookName;
}
public void setBookName(String bookName) {
this.bookName = bookName;
System.out.println("---Inside setBookName---");
}
@Override
public void destroy() throws Exception {
System.out.println("Inside dispose()");
}
}

Output
---Inside setBookName---
Inside afterPropertiesSet()
Book Name:Mahabharat-Updated
Inside dispose()
init-method and destroy-method in XML and initMethod and destroyMethod in JavaConfig
In the spring bean life cycle, the init-method and destroy-method attributes are used to specify custom initialization and destruction callback methods in XML configuration. The equivalent attributes in JavaConfig are initMethod and destroyMethod on the @Bean annotation. The customary method name for init-method or initMethod is init(), and for destroy-method or destroyMethod it is destroy(). init() is called just after the bean properties are set; it is used to perform any custom initialization or to check the values set on the properties. destroy() is called to release any resources before the spring container removes the bean; it is invoked when the close() method of the spring context is called. Find the example.
Book.java
package com.concretepage;
public class Book {
private String bookName;
public void init() {
System.out.println("inside init()");
}
public String getBookName() {
return bookName;
}
public void setBookName(String bookName) {
this.bookName = bookName;
System.out.println("---Inside setBookName---");
}
public void destroy() {
System.out.println("inside destroy()");
}
}

AppConfig.java
package com.concretepage;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class AppConfig {
@Bean(name = "myBook", initMethod="init", destroyMethod="destroy")
public Book getBean() {
Book book = new Book();
book.setBookName("Mahabharat");
return book;
}
}

In XML configuration the init-method and destroy-method attributes are declared as follows.
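
(The original XML snippet was stripped when copying; a plausible declaration, consistent with the JavaConfig example above, is:)

<bean id="myBook" class="com.concretepage.Book" init-method="init" destroy-method="destroy">
    <property name="bookName" value="Mahabharat"/>
</bean>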

Output
---Inside setBookName---
inside init()
Book Name:Mahabharat
inside destroy()

Now I am done. Happy spring learning!

Collection vs Arrays:
=====================
A collection is re-sizable, dynamically growing memory.
Provides useful data structures in the form of predefined classes, which reduces programming effort.
It supports storing heterogeneous elements (objects).
It provides ready-made, reasonably efficient implementations.
It provides extendability (as the incoming flow of data grows, the collection grows with it).
It provides adaptability (adding the contents of one collection to another, at the beginning, the end or the middle, is known as adaptability).
It is algorithm-oriented.
It provides built-in sorting.
It provides built-in searching.
It provides ready-made implementations of data structures such as Stack, Queue, LinkedList, Trees, etc.
Arrays vs Collections:
----------------------
1. Arrays are fixed in size: once created, the size cannot be increased or decreased. Collections are growable, so the size can change based on requirements.
2. Arrays can hold both primitives and objects. Collections can hold only objects, not primitives.
3. From a performance point of view, arrays are faster than collections.
4. Arrays can hold only homogeneous elements. Collections can hold both homogeneous and heterogeneous elements.
5. From a memory point of view, arrays are not recommended; collections are recommended.
6. Arrays provide no ready-made methods for common requirements; collections provide ready-made method support for almost every requirement.
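
A tiny example of point 1 of the comparison above (fixed-size array vs growable collection):

import java.util.ArrayList;
import java.util.List;

public class ArrayVsCollection {
    public static void main(String[] args) {
        int[] numbers = new int[2];       // capacity fixed at creation
        numbers[0] = 10;
        numbers[1] = 20;
        // numbers[2] = 30;               // would throw ArrayIndexOutOfBoundsException

        List<Integer> list = new ArrayList<>();
        list.add(10);
        list.add(20);
        list.add(30);                     // the list simply grows
        System.out.println(list.size());  // 3
    }
}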

It’s easy if you think of it like this: Collections are better than object arrays in basically every way imaginable.

You should prefer List over Foo[] whenever possible. Consider:

A collection can be mutable or immutable. A nonempty array must always be mutable.
A collection can be thread-safe; even concurrent. An array is never safe to publish to multiple threads.
A collection can allow or disallow null elements. An array must always permit null elements.
A collection is type-safe; an array is not. Because arrays “fake” covariance, ArrayStoreException can result at runtime.
A collection can hold a non-reifiable type (e.g. List<Class> or List<Optional>). With an array you get compilation warnings and confusing runtime exceptions.
A collection has a fully fleshed-out API; an array has only set-at-index, get-at-index and length.
A collection can have views (unmodifiable, subList, filter…); no such luck for an array. (See the sketch after this list.)
A list or set’s equals, hashCode and toString methods do what users expect; those methods on an array do anything but what you expect — a common source of bugs.
Because of all the reasons above, third-party libraries like Guava won’t bother adding much additional support for arrays, focusing only on collections, so there is a network effect.
Object arrays will never be first-class citizens in Java.
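
A short sketch of the "views" bullet above: subList and an unmodifiable wrapper are both backed by the same list.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ViewsDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("ann", "bob", "carl", "dave"));

        List<String> middle = names.subList(1, 3);                    // view of elements 1..2
        List<String> readOnly = Collections.unmodifiableList(names);  // read-only view

        middle.set(0, "BOB");                   // writes through to the backing list
        System.out.println(names);              // [ann, BOB, carl, dave]
        System.out.println(readOnly.get(1));    // BOB
        // readOnly.add("eve");                 // would throw UnsupportedOperationException
    }
}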

A few of the reasons above are covered in much greater detail in Effective Java, Second Edition, starting at page 119.

So, why would you ever use object arrays?

You have to interact with an API that uses them, and you can’t fix that API
so convert to/from a List as close to that API as you can
You have a reliable benchmark that shows you’re actually getting better performance with them
but benchmarks can lie, and often do
I can’t think of any other reasons

Collection(I) :
===============
Top-most interface of all collections.
There is no concrete implementation of this interface.
All collection classes directly or indirectly implement it.
Provides all common methods like add, addAll, contains, remove, toArray, equals, etc.
Implementation classes should contain at least two constructors: a default (no-arg) one and one taking a Collection argument.
The contains method internally uses equals logic to check existence: (o==null ? e==null : o.equals(e)).
You can create a collection from another collection via the copy constructor, but if the target collection does not allow nulls (or the element types are incompatible), it will throw NullPointerException or ClassCastException.
Map map = new HashMap();
map.put("one", "hdjhfbhd");
map.put("two", null);

// trying to insert null into Hashtable
Map hashTable = new Hashtable(map);
///Map hashTable2 = new Hashtable(integers);
System.out.println(map);

Exception in thread "main" java.lang.NullPointerException
at java.util.Hashtable.put(Unknown Source)
at java.util.Hashtable.putAll(Unknown Source)
at java.util.Hashtable.<init>(Unknown Source)
at com.me.pack.PrimitiveDraw.main(PrimitiveDraw.java:47)

You can't pass a Map to a Collection implementation's constructor (or vice versa), because Map and Collection are different interface hierarchies.

public interface Collection
extends Iterable
The root interface in the collection hierarchy. A collection represents a group of objects, known as its elements. Some collections allow duplicate elements and others do not. Some are ordered and others unordered. The JDK does not provide any direct implementations of this interface: it provides implementations of more specific subinterfaces like Set and List. This interface is typically used to pass collections around and manipulate them where maximum generality is desired.
Bags or multisets (unordered collections that may contain duplicate elements) should implement this interface directly.

All general-purpose Collection implementation classes (which typically implement Collection indirectly through one of its subinterfaces) should provide two “standard” constructors: a void (no arguments) constructor, which creates an empty collection, and a constructor with a single argument of type Collection, which creates a new collection with the same elements as its argument. In effect, the latter constructor allows the user to copy any collection, producing an equivalent collection of the desired implementation type. There is no way to enforce this convention (as interfaces cannot contain constructors) but all of the general-purpose Collection implementations in the Java platform libraries comply.

The “destructive” methods contained in this interface, that is, the methods that modify the collection on which they operate, are specified to throw UnsupportedOperationException if this collection does not support the operation. If this is the case, these methods may, but are not required to, throw an UnsupportedOperationException if the invocation would have no effect on the collection. For example, invoking the addAll(Collection) method on an unmodifiable collection may, but is not required to, throw the exception if the collection to be added is empty.

Some collection implementations have restrictions on the elements that they may contain. For example, some implementations prohibit null elements, and some have restrictions on the types of their elements. Attempting to add an ineligible element throws an unchecked exception, typically NullPointerException or ClassCastException. Attempting to query the presence of an ineligible element may throw an exception, or it may simply return false; some implementations will exhibit the former behavior and some will exhibit the latter. More generally, attempting an operation on an ineligible element whose completion would not result in the insertion of an ineligible element into the collection may throw an exception or it may succeed, at the option of the implementation. Such exceptions are marked as “optional” in the specification for this interface.

It is up to each collection to determine its own synchronization policy. In the absence of a stronger guarantee by the implementation, undefined behavior may result from the invocation of any method on a collection that is being mutated by another thread; this includes direct invocations, passing the collection to a method that might perform invocations, and using an existing iterator to examine the collection.

Many methods in Collections Framework interfaces are defined in terms of the equals method. For example, the specification for the contains(Object o) method says: “returns true if and only if this collection contains at least one element e such that (o==null ? e==null : o.equals(e)).” This specification should not be construed to imply that invoking Collection.contains with a non-null argument o will cause o.equals(e) to be invoked for any element e. Implementations are free to implement optimizations whereby the equals invocation is avoided, for example, by first comparing the hash codes of the two elements. (The Object.hashCode() specification guarantees that two objects with unequal hash codes cannot be equal.) More generally, implementations of the various Collections Framework interfaces are free to take advantage of the specified behavior of underlying Object methods wherever the implementor deems it appropriate.

Some collection operations which perform recursive traversal of the collection may fail with an exception for self-referential instances where the collection directly or indirectly contains itself. This includes the clone(), equals(), hashCode() and toString() methods. Implementations may optionally handle the self-referential scenario, however most current implementations do not do so.

This interface is a member of the Java Collections Framework.

Implementation Requirements:
The default method implementations (inherited or otherwise) do not apply any synchronization protocol. If a Collection implementation
has a specific synchronization protocol, then it must override default implementations to apply that protocol.
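
For example, a collection can be wrapped for shared use between threads with the synchronized wrappers; note that iteration still requires manual synchronization on the wrapper (a minimal sketch):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SynchronizedListDemo {
    public static void main(String[] args) {
        List<Integer> list = Collections.synchronizedList(new ArrayList<Integer>());
        list.add(1);
        list.add(2);

        // Iteration is NOT covered by the wrapper; callers must synchronize on the list themselves.
        synchronized (list) {
            for (Integer i : list) {
                System.out.println(i);
            }
        }
    }
}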

Collection(I) methods:
======================
retainAll():
———–
List firstlist=new ArrayList(Arrays.asList(52,25,36,25,3696,155,65));
List seclist=new ArrayList(Arrays.asList(52,25,65));

System.out.println(firstlist.retainAll(seclist));

System.out.println(firstlist);
System.out.println(seclist);
true
[52, 25, 25, 65]
[52, 25, 65]

List firstlist=new ArrayList(Arrays.asList(52,25,36,25,3696,155,65));
List seclist=new ArrayList(Arrays.asList(522565));

System.out.println(firstlist.retainAll(seclist));

System.out.println(firstlist);
System.out.println(seclist);
true
[]
[522565]

retainAll() keeps only the elements present in both collections; the remaining elements of the first collection are removed, and it returns true if the first collection was modified. There is no effect on the second collection.
If there is no matching element, the first collection becomes empty and the second stays as it is.

removeIf(Predicate..):
———–
List firstlist=new ArrayList(Arrays.asList(52,25,36,25,3696,155,65));
List seclist=new ArrayList(Arrays.asList(522565));

//System.out.println(firstlist.retainAll(seclist));

System.out.println(firstlist);
System.out.println(seclist);

firstlist.removeIf(i ->i>3000);
System.out.println(firstlist);

[52, 25, 36, 25, 3696, 155, 65]
[522565]
[52, 25, 36, 25, 155, 65]

toArray() methods:
——————
List firstlist=new ArrayList(Arrays.asList(52,25,36,25,3696,155,65));
It returns an array of type Object[].
Object[] objects=firstlist.toArray();
System.out.println(objects);

to convert to Specific object type:
System.out.println(firstlist.size());
Integer[] integerArray=new Integer[firstlist.size()];
firstlist.toArray(integerArray);

for(Integer y: integerArray) {
System.out.println(y);
}
System.out.println(integerArray.length);
6
52
25
36
25
155
65
6

If the passed array's length is less than the size of the collection, toArray allocates and returns a new array; the passed array itself is left unchanged, so iterating it prints null.
null
null
null
If the passed array's length is more than the collection's size, the elements are copied in and the element right after the last copied one is set to null (the remaining slots stay null for a freshly created array).
155
65
null
null
null
null
[Ljava.lang.Integer;@e9e54c2
10

size() => returns the number of elements in this collection. If this collection contains more than Integer.MAX_VALUE elements, returns Integer.MAX_VALUE.
isEmpty() => returns true if there are no elements.
contains() => uses equals internally.
Returns true if this collection contains the specified element.
More formally, returns true if and only if this collection contains at least one element e such that (o==null ? e==null : o.equals(e)).
iterator() => Used for traversing.
System.out.println("==============");
Iterator iterator=firstlist.iterator();
while(iterator.hasNext()) {
System.out.println(iterator.next());
}

ListIterator listIterator=firstlist.listIterator(firstlist.size());
while(listIterator.hasPrevious()) {
System.out.println(listIterator.previous());
}

Enumeration enumeration=Collections.enumeration(firstlist);
while(enumeration.hasMoreElements()) {
System.out.println(enumeration.nextElement());
}

add() =>
UnsupportedOperationException – if the add operation is not supported by this collection
ClassCastException – if the class of the specified element prevents it from being added to this collection
NullPointerException – if the specified element is null and this collection does not permit null elements
IllegalArgumentException – if some property of the element prevents it from being added to this collection
IllegalStateException – if the element cannot be added at this time due to insertion restrictions

remove() => removes a single instance of the element for which (o==null ? e==null : o.equals(e)) is true.
addAll(collection c)
removeAll(Collection c)
containsAll(collection c)

clear() => removes all elements.
Removes all of the elements from this collection (optional operation). The collection will be empty after this method returns.

stream() =>
parallelStream() =>
firstlist.stream().collect(Collectors.toList());

spliterator() =>
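spliterator() was added in Java 8 for traversing (and splitting) elements, mainly to support parallel streams. A minimal sketch, with list values assumed just for illustration:

import java.util.Arrays;
import java.util.List;
import java.util.Spliterator;

public class SpliteratorDemo {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(52, 25, 36, 25, 3696, 155, 65);
        Spliterator<Integer> sp = numbers.spliterator();
        sp.tryAdvance(n -> System.out.println("first: " + n));      // processes exactly one element (52)
        sp.forEachRemaining(n -> System.out.println("rest: " + n)); // processes all remaining elements
    }
}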

The Collection interface contains methods that perform basic operations, such as int size(), boolean isEmpty(), boolean contains(Object element), boolean add(E element), boolean remove(Object element), and Iterator iterator().

It also contains methods that operate on entire collections, such as boolean containsAll(Collection c), boolean addAll(Collection c), boolean removeAll(Collection c), boolean retainAll(Collection c), and void clear().

Additional methods for array operations (such as Object[] toArray() and T[] toArray(T[] a)) exist as well.
In JDK 8 and later, the Collection interface also exposes methods Stream stream() and Stream parallelStream(), for obtaining sequential or parallel streams from the underlying collection. (See the lesson entitled Aggregate Operations for more information about using streams.)

The Collection interface does about what you’d expect given that a Collection represents a group of objects. It has methods that tell you how many elements are in the collection (size, isEmpty), methods that check whether a given object is in the collection (contains), methods that add and remove an element from the collection (add, remove), and methods that provide an iterator over the collection (iterator).

The add method is defined generally enough so that it makes sense for collections that allow duplicates as well as those that don’t. It guarantees that the Collection will contain the specified element after the call completes, and returns true if the Collection changes as a result of the call. Similarly, the remove method is designed to remove a single instance of the specified element from the Collection,
assuming that it contains the element to start with, and to return true if the Collection was modified as a result.

These are but a few examples of what you can do with streams and aggregate operations. For more information and examples, see the lesson entitled Aggregate Operations.

The Collections framework has always provided a number of so-called “bulk operations” as part of its API.
These include methods that operate on entire collections, such as containsAll, addAll, removeAll, etc.
Do not confuse those methods with the aggregate operations that were introduced in JDK 8. The key difference
between the new aggregate operations and the existing bulk operations (containsAll, addAll, etc.) is that the old versions are all
mutative, meaning that they all modify the underlying collection. In contrast, the new aggregate operations do not modify the underlying collection.
When using the new aggregate operations and lambda expressions, you must take care to avoid mutation so as not to introduce problems in the future,
should your code be run later from a parallel stream.
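A small sketch of that difference, with list values assumed for illustration: the stream pipeline builds a new list and leaves the source untouched, while the bulk operation removeAll() mutates the source in place.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class BulkVsAggregate {
    public static void main(String[] args) {
        List<Integer> firstlist = new ArrayList<>(Arrays.asList(52, 25, 36, 25, 3696, 155, 65));

        // Aggregate operation: collects into a NEW list, firstlist is not modified
        List<Integer> small = firstlist.stream()
                .filter(i -> i < 100)
                .collect(Collectors.toList());
        System.out.println(small);      // [52, 25, 36, 25, 65]
        System.out.println(firstlist);  // unchanged

        // Bulk operation: modifies firstlist in place
        firstlist.removeAll(Arrays.asList(25, 3696));
        System.out.println(firstlist);  // [52, 36, 155, 65]
    }
}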

List(I):
========
A List is an ordered Collection (sometimes called a sequence). Lists may contain duplicate elements. In addition to the operations inherited from Collection, the List interface includes operations for the following:

Positional access — manipulates elements based on their numerical position in the list. This includes methods such as get, set, add, addAll, and remove.
Search — searches for a specified object in the list and returns its numerical position. Search methods include indexOf and lastIndexOf.
Iteration — extends Iterator semantics to take advantage of the list’s sequential nature. The listIterator methods provide this behavior.
Range-view — The sublist method performs arbitrary range operations on the list.
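A short sketch of these four groups of List operations, on an assumed list of strings:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListOperationsDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c", "b", "d"));

        // Positional access
        System.out.println(list.get(2));           // c
        list.set(0, "z");                          // [z, b, c, b, d]
        list.add(1, "x");                          // [z, x, b, c, b, d]

        // Search
        System.out.println(list.indexOf("b"));     // 2
        System.out.println(list.lastIndexOf("b")); // 4

        // Range-view: subList is backed by the original list
        List<String> middle = list.subList(1, 4);
        System.out.println(middle);                // [x, b, c]
    }
}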

remove(Object) and removeAll(Collection) ==> remove matching elements, searching from the start of the list.
add, addAll => append the element(s) at the end of the list.

list1.addAll(list2);
Here’s a nondestructive form of this idiom, which produces a third List consisting of the second list appended to the first.

List list3 = new ArrayList(list1);
list3.addAll(list2);
list3 contains the elements of list1 followed by the elements of list2.

a. remove(int index) : accepts the index of the element to be removed.
b. remove(Object obj) : accepts the object to be removed.
remove(1) removes the element at index 1;
remove(new Integer(123)) removes the value 123.
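For a List of Integers this overloading is a common pitfall: remove(1) resolves to remove(int index), while removing the value itself needs remove(Object). A minimal sketch with assumed values:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RemoveOverloadDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(Arrays.asList(10, 123, 20));

        list.remove(1);                   // remove(int index): drops the element at index 1 -> [10, 20]
        System.out.println(list);

        list.remove(Integer.valueOf(20)); // remove(Object): drops the value 20 -> [10]
        System.out.println(list);
    }
}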

Comparing List(I) elements:
—————————
Like the Set interface, List strengthens the requirements on the equals and hashCode methods so that
two List objects can be compared for logical equality without regard to their implementation classes.
Two List objects are equal if they contain the same elements in the same order.

ArrayList integers=new ArrayList(Arrays.asList(12,56,225));
ArrayList integers2=new ArrayList(Arrays.asList(12,56,225));
boolean b = integers2.equals(integers);
System.out.println(b);
—>true

ArrayList integers=new ArrayList(Arrays.asList(12,56,225));
ArrayList integers2=new ArrayList(Arrays.asList(12,225,56));
boolean b = integers2.equals(integers);
System.out.println(b);
–>false

Random class in java:
=====================
int a=new Random().nextInt(2); // returns 0 or 1; the bound is exclusive, so the (int) cast is unnecessary
System.out.println(a);

nextInt(0) throws an exception, because the bound must be positive:
Exception in thread “main” java.lang.IllegalArgumentException: bound must be positive
at java.util.Random.nextInt(Unknown Source)
at com.me.pack.DrawShapes.main(DrawShapes.java:73)
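Since the bound passed to nextInt(bound) is exclusive, nextInt(2) yields 0 or 1. A small sketch of the usual pattern for getting a value in a custom range (the range values are just an example):

import java.util.Random;

public class RandomRangeDemo {
    public static void main(String[] args) {
        Random random = new Random();

        int zeroOrOne = random.nextInt(2);                  // 0 or 1 (bound is exclusive)
        int dice = random.nextInt(6) + 1;                   // 1..6
        int min = 10, max = 20;
        int inRange = random.nextInt(max - min + 1) + min;  // 10..20 inclusive
        System.out.println(zeroOrOne + " " + dice + " " + inRange);
    }
}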

Java 7 diamond operator:
====================

//only Integers
List<Integer> integers3=new ArrayList<Integer>();
//Only Integers as we are limiting ourselves to Integer (the diamond <> lets the compiler infer the type argument).
List<Integer> integers4=new ArrayList<>();
//Any values including Object
List integers5=new ArrayList();
//Any values including Object
List<Object> integers6=new ArrayList<Object>();
/*
* CE (compile error): the type arguments on both sides must match
*
* List<Integer> integers7=new ArrayList<Object>();
List<Object> integers8=new ArrayList<Integer>();
*/

integers6.add(new Object());
integers6.add(new Integer(123));

integers5.add(new Object());
integers5.add(new Integer(123));

//integers4.add(new Object()); //CE: only Integers allowed
integers4.add(new Integer(123));

//integers3.add(new Object()); //CE: only Integers allowed
integers3.add(new Integer(123));
System.out.println(integers6);
System.out.println(integers5);

Java 8 – Streams, pipelines, iterators vs forEach – a good tutorial:
=============================================================
Lesson: Aggregate Operations
Note: To better understand the concepts in this section, review the sections Lambda Expressions and Method References.

For what do you use collections? You don’t simply store objects in a collection and leave them there. In most cases, you use collections to retrieve items stored in them.

Consider again the scenario described in the section Lambda Expressions. Suppose that you are creating a social networking application. You want to create a feature that enables an administrator to perform any kind of action, such as sending a message, on members of the social networking application that satisfy certain criteria.

As before, suppose that members of this social networking application are represented by the following Person class:

public class Person {

public enum Sex {
MALE, FEMALE
}

String name;
LocalDate birthday;
Sex gender;
String emailAddress;

// …

public int getAge() {
// …
}

public String getName() {
// …
}
}
The following example prints the name of all members contained in the collection roster with a for-each loop:

for (Person p : roster) {
System.out.println(p.getName());
}
The following example prints all members contained in the collection roster but with the aggregate operation forEach:

roster
.stream()
.forEach(e -> System.out.println(e.getName()));
Although, in this example, the version that uses aggregate operations is longer than the one that uses a for-each loop, you will see that versions that use bulk-data operations will be more concise for more complex tasks.

The following topics are covered:

Pipelines and Streams
Differences Between Aggregate Operations and Iterators
Find the code excerpts described in this section in the example BulkDataOperationsExamples.

Pipelines and Streams
A pipeline is a sequence of aggregate operations. The following example prints the male members contained in the collection roster with a pipeline that consists of the aggregate operations filter and forEach:

roster
.stream()
.filter(e -> e.getGender() == Person.Sex.MALE)
.forEach(e -> System.out.println(e.getName()));
Compare this example to the following that prints the male members contained in the collection roster with a for-each loop:

for (Person p : roster) {
if (p.getGender() == Person.Sex.MALE) {
System.out.println(p.getName());
}
}
A pipeline contains the following components:

A source: This could be a collection, an array, a generator function, or an I/O channel. In this example, the source is the collection roster.

Zero or more intermediate operations. An intermediate operation, such as filter, produces a new stream.

A stream is a sequence of elements. Unlike a collection, it is not a data structure that stores elements. Instead, a stream carries values from a source through a pipeline. This example creates a stream from the collection roster by invoking the method stream.

The filter operation returns a new stream that contains elements that match its predicate (this operation’s parameter). In this example, the predicate is the lambda expression e -> e.getGender() == Person.Sex.MALE. It returns the boolean value true if the gender field of object e has the value Person.Sex.MALE. Consequently, the filter operation in this example returns a stream that contains all male members in the collection roster.

A terminal operation. A terminal operation, such as forEach, produces a non-stream result, such as a primitive value (like a double value), a collection, or in the case of forEach, no value at all. In this example, the parameter of the forEach operation is the lambda expression e -> System.out.println(e.getName()), which invokes the method getName on the object e. (The Java runtime and compiler infer that the type of the object e is Person.)

The following example calculates the average age of all male members contained in the collection roster with a pipeline that consists of the aggregate operations filter, mapToInt, and average:

double average = roster
.stream()
.filter(p -> p.getGender() == Person.Sex.MALE)
.mapToInt(Person::getAge)
.average()
.getAsDouble();
The mapToInt operation returns a new stream of type IntStream (which is a stream that contains only integer values). The operation applies the function specified in its parameter to each element in a particular stream. In this example, the function is Person::getAge, which is a method reference that returns the age of the member. (Alternatively, you could use the lambda expression e -> e.getAge().) Consequently, the mapToInt operation in this example returns a stream that contains the ages of all male members in the collection roster.

The average operation calculates the average value of the elements contained in a stream of type IntStream. It returns an object of type OptionalDouble. If the stream contains no elements, then the average operation returns an empty instance of OptionalDouble, and invoking the method getAsDouble throws a NoSuchElementException. The JDK contains many terminal operations such as average that return one value by combining the contents of a stream. These operations are called reduction operations; see the section Reduction for more information.
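A self-contained sketch of that OptionalDouble behaviour, using orElse() as a safer alternative to getAsDouble() when the stream might be empty (the numbers are just for illustration):

import java.util.stream.IntStream;

public class AverageDemo {
    public static void main(String[] args) {
        double avg = IntStream.of(25, 30, 35).average().orElse(0.0);  // 30.0
        double none = IntStream.empty().average().orElse(0.0);        // 0.0 instead of NoSuchElementException
        System.out.println(avg + " " + none);
    }
}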

Differences Between Aggregate Operations and Iterators
Aggregate operations, like forEach, appear to be like iterators. However, they have several fundamental differences:

They use internal iteration: Aggregate operations do not contain a method like next to instruct them to process the next element of the collection. With internal delegation, your application determines what collection it iterates, but the JDK determines how to iterate the collection. With external iteration, your application determines both what collection it iterates and how it iterates it. However, external iteration can only iterate over the elements of a collection sequentially. Internal iteration does not have this limitation. It can more easily take advantage of parallel computing, which involves dividing a problem into subproblems, solving those problems simultaneously, and then combining the results of the solutions to the subproblems. See the section Parallelism for more information.

They process elements from a stream: Aggregate operations process elements from a stream, not directly from a collection. Consequently, they are also called stream operations.

They support behavior as parameters: You can specify lambda expressions as parameters for most aggregate operations. This enables you to customize the behavior of a particular aggregate operation.

IllegalArgumentException cases:
===============================
Any API should check the validity of the every parameter of any public method before executing it:

void setPercentage(int pct, AnObject object) {
if (pct < 0 || pct > 100) {
throw new IllegalArgumentException("pct has an invalid value");
}
if (object == null) {
throw new IllegalArgumentException("object is null");
}
}

The api doc for IllegalArgumentException is:

Thrown to indicate that a method has been passed an illegal or inappropriate argument.

From looking at how it is used in the jdk libraries, I would say:

It seems like a defensive measure to complain about obviously bad input before the input can get into the works and cause something to fail halfway through with a nonsensical error message.

It’s used for cases where it would be too annoying to throw a checked exception (although it makes an appearance in the java.lang.reflect code, where concern about ridiculous levels of checked-exception-throwing is not otherwise apparent).

I would use IllegalArgumentException to do last-ditch defensive argument-checking for common utilities (trying to stay consistent with the jdk usage), where the expectation is that a bad argument is a programmer error, similar to an NPE. I wouldn’t use it to implement validation in business code.
I certainly wouldn’t use it for the email example.

Modifiable Collections vs Un-Modifiable collections:
=====================================================
Collections.unmodifiableCollection(), unmodifiableList(), unmodifiableSet(), unmodifiableMap(), unmodifiableNavigableMap(), etc.
The Java Collections Framework provides an easy and simple way to create unmodifiable lists, sets and maps from existing ones, using the Collections‘ unmodifiableXXX() methods (Java Doc). In this article we will discuss how this works and will demonstrate common pitfalls when using such methods.

All code listed below is available at: https://github.com/javacreed/modifying-an-unmodifiable-list, under the collections project. Most of the examples will not contain the whole code and may omit fragments which are not relevant to the example being discussed. The readers can download or view all code from the above link.

Unmodifiable List
Consider the following simple example.

final List modifiable = new ArrayList();
modifiable.add(“Java”);
modifiable.add(“is”);

final List unmodifiable = Collections.unmodifiableList(modifiable);
System.out.println(“Before modification: ” + unmodifiable);

modifiable.add(“the”);
modifiable.add(“best”);

System.out.println(“After modification: ” + unmodifiable);
Here we have a list named: modifiable, from which we create an unmodifiable version using the Collections‘ method unmodifiableList(), named: unmodifiable. As their names imply, one list is modifiable (we can add and remove elements) while the other is read-only.

When executed, the above code will print the following to the command prompt.

Before modification: [Java, is]
After modification: [Java, is, the, best]
If this was a surprise or an unexpected answer, then you should continue reading this article to find out why this was produced.

Why was the list unmodifiable modified?

Let us first have a look at the Java Doc of the Collections‘ unmodifiableList() method.

public static List unmodifiableList(List list)

Returns an unmodifiable view of the specified list. This method allows modules to provide users with “read-only” access to internal lists. Query operations on the returned list “read through” to the specified list, and attempts to modify the returned list, whether direct or via its iterator, result in an UnsupportedOperationException.

The returned list will be serializable if the specified list is serializable. Similarly, the returned list will implement RandomAccess if the specified list does.

Parameters:
list – the list for which an unmodifiable view is to be returned.

Returns:
an unmodifiable view of the specified list.

The documentation mentions that the returned object (the unmodifiable list, referred to as returned list in this paragraph) is a read-only view of the given one (referred to in this paragraph as the given list). It says nothing about the behaviour we just saw. Well now we know that while we cannot modify the returned list, any changes to the given list are observed by the returned one.
How to prevent modifications to the unmodifiable list?
The solution to this problem is quite simple and is highlighted in the following code.

final List modifiable = new ArrayList();
modifiable.add(“Java”);
modifiable.add(“is”);

// Here we are creating a new array list
final List unmodifiable = Collections.unmodifiableList(new ArrayList(modifiable));
System.out.println(“Before modification: ” + unmodifiable);

modifiable.add(“the”);
modifiable.add(“best”);

System.out.println(“After modification: ” + unmodifiable);
Note that in this example, we are not passing the modifiable list object, but a new list created from this one. When we create a list like this, the elements of the given list are simply copied to the one being created. Therefore, these two lists (the modifiable list and the one just created: new ArrayList(modifiable) list) are disconnected and they only share the elements. Any modifications to each list will not affect the other.

One has to be aware that if we modify any of the elements within any of the lists, then all variables referring to the modified object will observe the modification. This is not the case here as String is an immutable object and thus cannot be changed once created. But if the objects contained by these two lists were mutable, then any change to an object in one of the lists would be observed in the peer element in the other list.

The above code will produce the following result

Before modification: [Java, is]
After modification: [Java, is]

Iterator vs ListIterator and Enumeration:
=========================================
Iterator traverses forward only and supports remove(); ListIterator (available only for Lists) can traverse in both directions and additionally supports add(), set() and index access; Enumeration is the legacy, read-only interface with no remove support.
System.out.println(“==============”);
Iterator iterator=firstlist.iterator();
while(iterator.hasNext()) {
System.out.println(iterator.next());
}

ListIterator listIterator=firstlist.listIterator(firstlist.size());
while(listIterator.hasPrevious()) {
System.out.println(listIterator.previous());
}

Enumeration enumeration=Collections.enumeration(firstlist);
while(enumeration.hasMoreElements()) {
System.out.println(enumeration.nextElement());
}

Show SQL queries of Hibernate in Spring Boot:
=============================================
#show sql statement
logging.level.org.hibernate.SQL=debug

#show sql values
logging.level.org.hibernate.type.descriptor.sql=trace
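A commonly used companion setting, assuming Spring Data JPA is on the classpath, is to have Hibernate pretty-print the statements as well:

#pretty-print the generated SQL
spring.jpa.properties.hibernate.format_sql=true
#alternative way to print the SQL statements
spring.jpa.show-sql=true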

Uber vs Flat jar:
==================
There is no difference whatsoever. These terms are all synonyms of each other.

The term “uber-jar” may be more commonly used in documentations (take the maven-shade-plugin documentation for example) but “fat-jar” is also widely used.

Über is the German word for above or over, as in a line from a previous national anthem: Deutschland, Deutschland, über alles (Germany, Germany above all else).

Hence, in this context, an uber-jar is an “over-jar”, one level up from a simple “jar”, defined as one that contains both your package and all its dependencies in one single JAR file. The name can be thought to come from the same stable as ultrageek, superman, hyperspace, and metadata, which all have similar meanings of “beyond the normal”.

The advantage is that you can distribute your uber-jar and not care at all whether or not dependencies are installed at the destination, as your uber-jar actually has no dependencies.

All the dependencies of your own stuff within the uber-jar are also within that uber-jar. As are all dependencies of those dependencies. And so on.
The fat jar is the jar, which contains classes from all the libraries, on which your project depends and, of course, the classes of current project.

You can create an uber jar by using the maven-shade-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <manifestEntries>
                            <Main-Class>com.howtodoinjava.demo.App</Main-Class>
                            <Build-Number>1.0</Build-Number>
                        </manifestEntries>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>

spring boot+rest :
==================

I have selected dependencies like Jersey, Spring Web, Spring HATEOAS, Spring JPA and Spring Security etc. You can add more dependencies after you have downloaded and imported the project or in future when requirements arise.
Convention over configuration:
——————————-
Spring Boot uses convention over configuration by scanning the dependent libraries available in the class path. For each spring-boot-starter-* dependency in the POM file, Spring Boot executes a default AutoConfiguration class. AutoConfiguration classes use the *AutoConfiguration lexical pattern, where * represents the library. For example, the autoconfiguration of spring security is done through SecurityAutoConfiguration.

At the same time, if you don’t want to use auto configuration for any project, it makes it very simple. Just use exclude = SecurityAutoConfiguration.class like below.
@SpringBootApplication(exclude=SecurityAutoConfiguration.class)
-> excludes it.

@SpringBootApplication:
========================
@SpringBootApplication Annotation
SpringBootApplication is defined as below:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Inherited
@SpringBootConfiguration
@EnableAutoConfiguration
@ComponentScan(excludeFilters = @Filter(type = FilterType.CUSTOM, classes = TypeExcludeFilter.class))
public @interface SpringBootApplication
{
//more code
}

@SpringBootApplication is a combination of three annotations:
@ComponentScan
@SpringBootConfiguration
@EnableAutoConfiguration

@SpringBootConfiguration
@Configuration
public @interface SpringBootConfiguration
{
//more code
}
This annotation adds the @Configuration annotation to the class, which marks the class as a source of bean definitions for the application context.

@EnableAutoConfiguration
This tells Spring Boot to auto-configure important bean definitions based on the dependencies added in pom.xml, by adding beans based on classpath settings, other beans, and various property settings.

@ComponentScan
This annotation tells spring boot to scan base package, find other beans/components and configure them as well.

Actuators:
==========
In this Spring Boot actuator tutorial, learn about the in-built HTTP endpoints available for any Boot application for different
monitoring and management purposes. Before Spring Boot, if we had to introduce this type of monitoring functionality in our
applications then we had to develop all those components manually, and they were very specific to our needs. But with Spring Boot
we have the Actuator module which makes it very easy.

/env Returns the list of properties in the current environment.
/health Returns application health information.
/auditevents Returns audit events information for the current application.
/autoconfig Returns all auto-configuration candidates and the reason why they 'were' or 'were not' applied.
/beans Returns a complete list of all the Spring beans in your application.
/trace Returns trace logs (by default the last 100 HTTP requests).
/dump Performs a thread dump.
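Note: the endpoint names above are the Spring Boot 1.x ones. On Spring Boot 2.x (as in the startup logs later in these notes) they live under /actuator/* (/trace became /httptrace and /dump became /threaddump), and most of them are not exposed over HTTP by default; they can be exposed via application.properties, for example:

management.endpoints.web.exposure.include=health,info,beans,env
management.endpoint.health.show-details=always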

Actuator Security with WebSecurityConfigurerAdapter
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
public class SpringSecurityConfig extends WebSecurityConfigurerAdapter {

@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
auth.inMemoryAuthentication().withUser(“admin”).password(“admin”).roles(“ADMIN”);

}
}
CORS support
CORS support is disabled by default and is only enabled once the endpoints.cors.allowed-origins property has been set.

endpoints.cors.allowed-origins = http://example.com
endpoints.cors.allowed-methods = GET,POST

CommandLineRunner in Spring Boot:
=============================
Spring boot’s CommandLineRunner interface is used to run a code block only once in application’s lifetime – after application is initialized.
@Component
class ApplicationStartupRunner implements CommandLineRunner {
protected final Log logger = LogFactory.getLog(getClass());

@Override
public void run(String... args) throws Exception {
logger.info("ApplicationStartupRunner run method Started !!");
}
}

If you have multiple CommandLineRunner implementations:

package com.example.springbootmanagementexample;

import org.springframework.boot.CommandLineRunner;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;

public class ApplicationStartupRunner implements CommandLineRunner {
// protected final Log logger = LogFactory.getLog(getClass());
@Override
public void run(String... args) throws Exception {
System.out.println("Application Started !!");
}
}

@Component
@Order(value=2)
class ApplicationStartupRunner2 implements CommandLineRunner {
// protected final Log logger = LogFactory.getLog(getClass());
@Override
public void run(String... args) throws Exception {
System.out.println("Application Started 2!!");
}
}

@Component
@Order(value=3)
class ApplicationStartupRunner3 implements CommandLineRunner {
// protected final Log logger = LogFactory.getLog(getClass());
@Override
public void run(String... args) throws Exception {
System.out.println("Application Started23 !!");
}
}

change the embedded server from tomcat to jetty:
================================================

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jetty</artifactId>
</dependency>

server.port=8080
server.servlet.context-path=/home

####Jetty specific properties########

server.jetty.acceptors= # Number of acceptor threads to use.
server.jetty.max-http-post-size=0 # Maximum size in bytes of the HTTP post or put content.
server.jetty.selectors= # Number of selector threads to use.
Also, you may configure these options programmatically using the JettyEmbeddedServletContainerFactory bean.

@Bean
public JettyEmbeddedServletContainerFactory jettyEmbeddedServletContainerFactory() {
JettyEmbeddedServletContainerFactory jettyContainer =
new JettyEmbeddedServletContainerFactory();

jettyContainer.setPort(9000);
jettyContainer.setContextPath(“/home”);
return jettyContainer;
}

port change:
=================
1) Change default server port from application.properties file
You can do lots of wonderful things by simply making few entries in application.properties file in any spring boot application. Changing server port is one of them.

### Default server port #########
server.port=9000
2) Implement EmbeddedServletContainerCustomizer interface
EmbeddedServletContainerCustomizer interface is used for customizing auto-configured embedded servlet containers. Any beans of this type will get a callback with the container factory before the container itself is started, so you can set the port, address, error pages etc.

import org.springframework.boot.context.embedded.ConfigurableEmbeddedServletContainer;
import org.springframework.boot.context.embedded.EmbeddedServletContainerCustomizer;
import org.springframework.stereotype.Component;

@Component
public class AppContainerCustomizer implements EmbeddedServletContainerCustomizer {

@Override
public void customize(ConfigurableEmbeddedServletContainer container) {

container.setPort(9000);

}
}
3) Change server port from command line
If your application is built as an uber jar, you may consider this option as well.

java -jar -Dserver.port=9000 spring-boot-demo.jar
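Passing the property as an application argument works as well; command-line arguments take precedence over values in application.properties:

java -jar spring-boot-demo.jar --server.port=9000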

change Application context path:
——————————–
@Component
class AppContainerCustomizer implements EmbeddedServletContainerCustomizer {

@Override
public void customize(ConfigurableEmbeddedServletContainer container) {

container.setPort(9010);
//you can set the context path as well
container.setContextPath("/home");

}
}

http://localhost:PORT/

main app:
package com.example.springbootmanagementexample;

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory;
import org.springframework.boot.web.support.SpringBootServletInitializer;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class SpringBootManagementExampleApplication extends
SpringBootServletInitializer implements CommandLineRunner {

public static void main(String[] args) {
SpringApplication.run(SpringBootManagementExampleApplication.class, args);
}

@Override
public void run(String... arg0) throws Exception {
// TODO Auto-generated method stub
System.out.println("I ran…………..");
}

@Bean
public ApplicationStartupRunner runner() {
return new ApplicationStartupRunner();
}

@Bean
public ApplicationStartupRunner2 runner2() {
return new ApplicationStartupRunner2();
}
@Bean
public ApplicationStartupRunner3 runner3() {
return new ApplicationStartupRunner3();
}

@Bean
public JettyEmbeddedServletContainerFactory getIt() {
JettyEmbeddedServletContainerFactory containerFactory=
new JettyEmbeddedServletContainerFactory();
containerFactory.setPort(9000);
containerFactory.setContextPath(“/home”);
return containerFactory;

}

@Bean
public AppContainerCustomizer getit() {
return new AppContainerCustomizer();

}
}

http => Https:
==============
keytool -genkey -alias selfsigned_localhost_sslserver -keyalg RSA -keysize 2048 -validity 700 -keypass changeit -storepass changeit -keystore ssl-server.jks

Spring boot SSL Configuration
First we need to copy the generated keystore file (ssl-server.jks) into the resources folder and then open the application.properties and add the below entries.

server.port=8443
server.ssl.key-alias=selfsigned_localhost_sslserver
server.ssl.key-password=changeit
server.ssl.key-store=classpath:ssl-server.jks
server.ssl.key-store-provider=SUN
server.ssl.key-store-type=JKS
That’s all we need to enable https. It’s pretty easy, right? Thanks to spring boot for making everything possible very easily.

Redirect HTTP requests to HTTPS
This is an optional step in case you want to redirect your HTTP traffic to HTTPS, so that the full site becomes secured. To do that in Spring Boot, we need to add an HTTP connector on port 8080 and set its redirect port to 8443, so that any HTTP request on 8080 is automatically redirected to 8443 over HTTPS.

To do that you just need to add below configuration.

@Bean
public EmbeddedServletContainerFactory servletContainer() {
TomcatEmbeddedServletContainerFactory tomcat = new TomcatEmbeddedServletContainerFactory() {
@Override
protected void postProcessContext(Context context) {
SecurityConstraint securityConstraint = new SecurityConstraint();
securityConstraint.setUserConstraint(“CONFIDENTIAL”);
SecurityCollection collection = new SecurityCollection();
collection.addPattern(“/*”);
securityConstraint.addCollection(collection);
context.addConstraint(securityConstraint);
}
};

tomcat.addAdditionalTomcatConnectors(redirectConnector());
return tomcat;
}

private Connector redirectConnector() {
Connector connector = new Connector(“org.apache.coyote.http11.Http11NioProtocol”);
connector.setScheme(“http”);
connector.setPort(8080);
connector.setSecure(false);
connector.setRedirectPort(8443);

return connector;
}

getAll beans:
===============
@Autowired
private ApplicationContext appContext;

@Override
public void run(String... args) throws Exception
{
String[] beans = appContext.getBeanDefinitionNames();
Arrays.sort(beans);
for (String bean : beans)
{
System.out.println(bean + " of Type :: " + appContext.getBean(bean).getClass());
}
}

PropertyEditor:
===============
Spring has a number of built-in PropertyEditors in the org.springframework.beans.propertyeditors package, e.g. for Boolean, Currency, and URL. Some of these editors are registered by default, while others you need to register when required.

You can also create a custom PropertyEditor in case the default property editors do not serve the purpose. Let's say we are creating an application for books management, and people can search the books by ISBN as well. Also, you will need to display the ISBN details in a web page.

@RequestMapping(value = “/books/{isbn}”, method = RequestMethod.GET)
public String getBook(@PathVariable Isbn isbn, Map model)
{
LOGGER.info(“You searched for book with ISBN :: ” + isbn.getIsbn());
model.put(“isbn”, isbn);
return “index”;
}

@InitBinder
public void initBinder(WebDataBinder binder) {
binder.registerCustomEditor(Isbn.class, new IsbnEditor());
}

Note: PropertyEditors are not thread-safe.
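The Isbn and IsbnEditor types referenced above are not shown in these notes; a minimal sketch of what they might look like (class and field names are assumptions), based on java.beans.PropertyEditorSupport:

import java.beans.PropertyEditorSupport;

// Hypothetical value class assumed by the controller snippet above
public class Isbn {
    private String isbn;

    public Isbn(String isbn) { this.isbn = isbn; }
    public String getIsbn() { return isbn; }
    public void setIsbn(String isbn) { this.isbn = isbn; }
}

// Minimal sketch of a custom editor: converts the path-variable text into an Isbn
class IsbnEditor extends PropertyEditorSupport {
    @Override
    public void setAsText(String text) {
        setValue(new Isbn(text == null ? null : text.trim()));
    }

    @Override
    public String getAsText() {
        Isbn isbn = (Isbn) getValue();
        return isbn == null ? "" : isbn.getIsbn();
    }
}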

Scheduling in spring boot:
==========================
Add @EnableScheduling annotation to your spring boot application class. @EnableScheduling is a Spring Context module annotation. It internally imports the SchedulingConfiguration via the @Import(SchedulingConfiguration.class) instruction

@SpringBootApplication
@EnableScheduling
public class SpringBootWebApplication {

}

Encountered invalid @Scheduled method 'run': Only no-arg methods may be annotated with @Scheduled.
2018-03-11 14:59:06 INFO 6288 --- [ main] ... (Spring MVC, Spring Data REST and actuator request-mapping registrations trimmed) ...
2018-03-11 14:59:07.087 INFO 6288 — [ main] s.a.ScheduledAnnotationBeanPostProcessor : No TaskScheduler/ScheduledExecutorService bean found for scheduled processing
2018-03-11 14:59:07.128 INFO 6288 — [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ”
2018-03-11 14:59:07.146 INFO 6288 — [ main] com.me.pack.MetricsActuatorApplication : Started MetricsActuatorApplication in 6.104 seconds (JVM running for 6.918)
Sun Mar 11 14:59:07 IST 2018
Sun Mar 11 14:59:08 IST 2018
Sun Mar 11 14:59:08 IST 2018
Sun Mar 11 14:59:09 IST 2018
Sun Mar 11 14:59:09 IST 2018
Sun Mar 11 14:59:10 IST 2018
Sun Mar 11 14:59:10 IST 2018
Sun Mar 11 14:59:11 IST 2018
Sun Mar 11 14:59:11 IST 2018
Sun Mar 11 14:59:12 IST 2018
Sun Mar 11 14:59:12 IST 2018
Sun Mar 11 14:59:13 IST 2018
Sun Mar 11 14:59:13 IST 2018

package com.me.pack;

import java.util.Date;

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@SpringBootApplication
@EnableScheduling
public class MetricsActuatorApplication {

public static void main(String[] args) {
SpringApplication.run(MetricsActuatorApplication.class, args);
}

@Scheduled(initialDelay=500, fixedDelay=500)
public void run() throws Exception {
// TODO Auto-generated method stub
System.out.println(new Date());
}
}

Now you can add @Scheduled annotations on the methods which you want to schedule. The only condition is that the methods should be without arguments.

ScheduledAnnotationBeanPostProcessor that will be created by the imported SchedulingConfiguration scans all declared beans for the presence of the @Scheduled annotations.

For every annotated method without arguments, the appropriate executor thread pool will be created. This thread pool will manage the scheduled invocation of the annotated method.

@Scheduled(initialDelay = 1000, fixedRate = 10000)
public void run() {
logger.info(“Current time is :: ” + Calendar.getInstance().getTime());
}
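@Scheduled also accepts a cron expression (six fields: second, minute, hour, day of month, month, day of week). A small sketch; the schedule itself is just an example:

@Scheduled(cron = "0 0/15 * * * *") // at second 0, every 15 minutes
public void runEveryFifteenMinutes() {
    System.out.println("Cron triggered at " + new java.util.Date());
}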

Add Jersey config related stuff:
==================================
package com.me.pack;

import org.glassfish.jersey.server.ResourceConfig;
import org.springframework.stereotype.Component;

@Component
public class JerseyConfig extends ResourceConfig{
public JerseyConfig() {
register(PlacesService.class);
}
}

JMS with default active MQ:
==========================
package com.me.pack;

import javax.jms.ConnectionFactory;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jms.DefaultJmsListenerContainerFactoryConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.config.JmsListenerContainerFactory;
import org.springframework.jms.support.converter.MappingJackson2MessageConverter;
import org.springframework.jms.support.converter.MessageConverter;
import org.springframework.jms.support.converter.MessageType;

@SpringBootApplication
@EnableJms
public class MsgApplication {

public static void main(String[] args) {
SpringApplication.run(MsgApplication.class, args);
}

@Bean
public JmsListenerContainerFactory<?> myFactory(
ConnectionFactory connectionFactory,
DefaultJmsListenerContainerFactoryConfigurer configurer)
{
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
// This provides all of Boot's defaults to this factory, including the message converter
configurer.configure(factory, connectionFactory);
// You could still override some of Boot's defaults if necessary.
return factory;
}

@Bean
public MessageConverter jacksonJmsMessageConverter()
{
MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
converter.setTargetType(MessageType.TEXT);
converter.setTypeIdPropertyName("_type");
return converter;
}
}

package com.me.pack;

import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class MessageListener {

    @JmsListener(destination = "jms.message.endpoint")
    public void receiveMessage(Message message) {
        System.out.println("Message received: " + message);
    }
}

package com.me.pack;

import java.util.Date;

import org.springframework.boot.SpringApplication;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.jms.core.JmsTemplate;

public class TestClass {
    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                SpringApplication.run(MsgApplication.class, args);
        JmsTemplate jmsTemplate = context.getBean(JmsTemplate.class);
        jmsTemplate.convertAndSend(
                "jms.message.endpoint", new Message(1001, "test body", new Date()));
    }
}

2018-03-11 16:14:56.093 INFO 14320 — [ main] com.me.pack.TestClass : Started TestClass in 3.582 seconds (JVM running for 4.634)
Message received: Message [id=1001, message=test body, date=Sun Mar 11 16:14:56 IST 2018]
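The listener and sender above assume a simple Message POJO with id, message and date fields; the actual class is not in these notes, so this is only a sketch of its likely shape (field names inferred from the printed output):

package com.me.pack;

import java.util.Date;

// Assumed payload class serialized/deserialized by MappingJackson2MessageConverter above.
public class Message {

    private int id;
    private String message;
    private Date date;

    // Jackson needs a no-arg constructor for deserialization.
    public Message() {
    }

    public Message(int id, String message, Date date) {
        this.id = id;
        this.message = message;
        this.date = date;
    }

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }
    public Date getDate() { return date; }
    public void setDate(Date date) { this.date = date; }

    @Override
    public String toString() {
        return "Message [id=" + id + ", message=" + message + ", date=" + date + "]";
    }
}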

@EnableJms
@JmsListener

JSP view:
=========
@Controller
public class IndexController {

    @RequestMapping("/")
    public String home(Map<String, Object> model) {
        model.put("message", "HowToDoInJava Reader !!");
        return "index";
    }

    @RequestMapping("/next")
    public String next(Map<String, Object> model) {
        model.put("message", "You are in new page !!");
        return "next";
    }
}

Add the view prefix and suffix in application.properties:
spring.mvc.view.prefix=/WEB-INF/view/
spring.mvc.view.suffix=.jsp

//For detailed logging during development

logging.level.org.springframework=TRACE
logging.level.com=TRACE

@Configuration
@EnableWebMvc
@ComponentScan
public class MvcConfiguration extends WebMvcConfigurerAdapter {
    @Override
    public void configureViewResolvers(ViewResolverRegistry registry) {
        InternalResourceViewResolver resolver = new InternalResourceViewResolver();
        resolver.setPrefix("/WEB-INF/view/");
        resolver.setSuffix(".jsp");
        resolver.setViewClass(JstlView.class);
        registry.viewResolver(resolver);
    }
}

Similarly, this can be configured in WebSecurityConfigurerAdapter's configure() method.

====================
Logging with YML:
—————————-
%clr{%d{yyyy-MM-dd HH:mm:ss.SSS}}{faint} %clr{${LOG_LEVEL_PATTERN}} %clr{${sys:PID}}{magenta}
%clr{---}{faint} %clr{[%15.15t]}{faint} %clr{%-40.40c{1.}}{cyan}
%clr{:}{faint} %m%n${sys:LOG_EXCEPTION_CONVERSION_WORD}
The above pattern prints these listed log message parts with the respective color coding applied:

Date and Time — Millisecond precision.
Log Level — ERROR, WARN, INFO, DEBUG or TRACE.
Process ID.
A --- separator to distinguish the start of actual log messages.
Thread name — Enclosed in square brackets (may be truncated for console output).
Logger name — This is usually the source class name (often abbreviated).
The log message

Color-coded logging output
If your terminal supports ANSI, color output will be used to aid readability. You can set spring.output.ansi.enabled value to either ALWAYS, NEVER or DETECT.

spring:
  output:
    ansi:
      enabled: DETECT
Color coding is configured using the %clr conversion word. In its simplest form the converter will color the output according to the log level.

Spring Boot Logging with application.yml

Learn spring boot logging configuration via application.yml file in simple and easy to follow instructions. In the default structure of a Spring Boot web application, you can locate the application.yml file under the resources folder.

Read More: Spring Boot Logging with application.properties
Table of Contents

Understand default spring boot logging
Set logging level
Set logging pattern
Set logging output to file
Using active profiles to load environment specific logging configuration
Color-coded logging output

Understand default spring boot logging
To understand default spring boot logging, let's add some log statements to a spring boot hello world example. Just to mention, there is no logging related configuration in the application.yml file. If you see any configuration in the downloaded application, please remove it.

private final Logger LOGGER = LoggerFactory.getLogger(this.getClass());

@RequestMapping("/")
public String home(Map<String, Object> model) {

    LOGGER.debug("This is a debug message");
    LOGGER.info("This is an info message");
    LOGGER.warn("This is a warn message");
    LOGGER.error("This is an error message");

    model.put("message", "HowToDoInJava Reader !!");
    return "index";
}
Start the application, access it in a browser and verify the log messages in the console.

2017-03-02 23:33:51.318 INFO 3060 — [nio-8080-exec-1] c.h.app.controller.IndexController : info log statement printed
2017-03-02 23:33:51.319 WARN 3060 — [nio-8080-exec-1] c.h.app.controller.IndexController : warn log statement printed
2017-03-02 23:33:51.319 ERROR 3060 — [nio-8080-exec-1] c.h.app.controller.IndexController : error log statement printed
Note that the default logging level is INFO: the debug message is not printed.
There is a fixed default log message pattern, configured in the base configuration files.

%clr{%d{yyyy-MM-dd HH:mm:ss.SSS}}{faint} %clr{${LOG_LEVEL_PATTERN}} %clr{${sys:PID}}{magenta}
%clr{---}{faint} %clr{[%15.15t]}{faint} %clr{%-40.40c{1.}}{cyan}
%clr{:}{faint} %m%n${sys:LOG_EXCEPTION_CONVERSION_WORD}
The above pattern prints these listed log message parts with the respective color coding applied:

Date and Time — Millisecond precision.
Log Level — ERROR, WARN, INFO, DEBUG or TRACE.
Process ID.
A --- separator to distinguish the start of actual log messages.
Thread name — Enclosed in square brackets (may be truncated for console output).
Logger name — This is usually the source class name (often abbreviated).
The log message

Set logging level
When a message is logged via a Logger it is logged with a certain log level. In the application.yml file, you can define log levels of Spring Boot loggers, application loggers, Hibernate loggers, Thymeleaf loggers, and more. To set the logging level for any logger, add keys starting with logging.level.

The logging level can be one of TRACE, DEBUG, INFO, WARN, ERROR, FATAL, OFF. The root logger can be configured using logging.level.root.

logging:
  level:
    root: ERROR
    org.springframework.web: ERROR
    com.howtodoinjava: DEBUG
    org.hibernate: ERROR
In the above configuration, I upgraded the log level for application classes to DEBUG (from the default INFO). Now observe the logs:

2017-03-02 23:57:14.966 DEBUG 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : debug log statement printed
2017-03-02 23:57:14.967 INFO 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : info log statement printed
2017-03-02 23:57:14.967 WARN 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : warn log statement printed
2017-03-02 23:57:14.967 ERROR 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : error log statement printed

Set logging pattern
To change the logging patterns, use the logging.pattern.console and logging.pattern.file keys.

logging:
  pattern:
    console: "%d{yyyy-MM-dd HH:mm:ss} – %msg%n"
    file: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} – %msg%n"
After changing console logging pattern in application, log statements are printed as below:

2017-03-03 12:59:13 – This is a debug message
2017-03-03 12:59:13 – This is an info message
2017-03-03 12:59:13 – This is a warn message
2017-03-03 12:59:13 – This is an error message

Set logging output to file
To print the logs to a file, use the logging.file or logging.path key.

logging:
  file: /logs/application-debug.log
Verify the logs in file.

2017-03-03 13:02:50.608 DEBUG 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is a debug message
2017-03-03 13:02:50.608 INFO 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is an info message
2017-03-03 13:02:50.608 WARN 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is a warn message
2017-03-03 13:02:50.609 ERROR 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is an error message

Using active profiles to load environment specific logging configuration
It is desirable to have multiple configurations for any application, where each configuration is specific to a particular runtime environment. In spring boot, you can achieve this by creating multiple application-{profile}.yml files in the same location as the application.yml file.

Profile specific keys always override the non-profile specific ones. If several profiles are specified, a last wins strategy applies.

If I have two environments for my application, i.e. prod and dev, then I will create two profile-specific yml files.

application-dev.yml

logging:
  file: logs/application-debug.log
  pattern:
    console: "%d %-5level %logger : %msg%n"
    file: "%d %-5level [%thread] %logger : %msg%n"
  level:
    org.springframework.web: ERROR
    com.howtodoinjava: DEBUG
    org.hibernate: ERROR

application-prod.yml

logging:
  file: logs/application-debug.log
  pattern:
    console: "%d %-5level %logger : %msg%n"
    file: "%d %-5level [%thread] %logger : %msg%n"
  level:
    org.springframework.web: ERROR
    com.howtodoinjava: INFO
    org.hibernate: ERROR
To supply profile information to the application, the spring.profiles.active key is passed at launch time.

$ java -jar -Dspring.profiles.active=prod spring-boot-demo.jar

==> Set the profile via the command line at launch time:
$ java -jar -Dspring.profiles.active=prod spring-boot-demo.jar
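Besides the -Dspring.profiles.active JVM argument, a profile can also be activated programmatically before the context starts; a minimal sketch (the launcher class and the referenced application class are placeholders for any @SpringBootApplication class):

import org.springframework.boot.builder.SpringApplicationBuilder;

public class ProdLauncher {
    public static void main(String[] args) {
        // Activates the 'prod' profile so application-prod.yml/.properties is picked up.
        new SpringApplicationBuilder(MyappApplication.class)
                .profiles("prod")
                .run(args);
    }
}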

with properties:
==================
Understand default spring boot logging
Set logging level
Set logging pattern
Set logging output to file
Using active profiles to load environment specific logging configuration
Color-coded logging output

Understand default spring boot logging
To understand default spring boot logging, let's add some log statements to a spring boot hello world example. Just to mention, there is no logging related configuration in the application.properties file. If you see any configuration in the downloaded application, please remove it.

private final Logger LOGGER = LoggerFactory.getLogger(this.getClass());

@RequestMapping("/")
public String home(Map<String, Object> model) {

    LOGGER.debug("This is a debug message");
    LOGGER.info("This is an info message");
    LOGGER.warn("This is a warn message");
    LOGGER.error("This is an error message");

    model.put("message", "HowToDoInJava Reader !!");
    return "index";
}
Start the application, access it in a browser and verify the log messages in the console.

2017-03-02 23:33:51.318 INFO 3060 — [nio-8080-exec-1] c.h.app.controller.IndexController : info log statement printed
2017-03-02 23:33:51.319 WARN 3060 — [nio-8080-exec-1] c.h.app.controller.IndexController : warn log statement printed
2017-03-02 23:33:51.319 ERROR 3060 — [nio-8080-exec-1] c.h.app.controller.IndexController : error log statement printed
Note that the default logging level is INFO: the debug message is not printed.
There is a fixed default log message pattern, configured in the base configuration files.

%clr{%d{yyyy-MM-dd HH:mm:ss.SSS}}{faint} %clr{${LOG_LEVEL_PATTERN}} %clr{${sys:PID}}{magenta}
%clr{---}{faint} %clr{[%15.15t]}{faint} %clr{%-40.40c{1.}}{cyan} %clr{:}{faint} %m%n${sys:LOG_EXCEPTION_CONVERSION_WORD}
The above pattern prints these listed log message parts with the respective color coding applied:

Date and Time — Millisecond precision.
Log Level — ERROR, WARN, INFO, DEBUG or TRACE.
Process ID.
A --- separator to distinguish the start of actual log messages.
Thread name — Enclosed in square brackets (may be truncated for console output).
Logger name — This is usually the source class name (often abbreviated).
The log message

Set logging level
When a message is logged via a Logger it is logged with a certain log level. In the application.properties file, you can define log levels of Spring Boot loggers, application loggers, Hibernate loggers, Thymeleaf loggers, and more. To set the logging level for any logger, add properties starting with logging.level.

The logging level can be one of TRACE, DEBUG, INFO, WARN, ERROR, FATAL, OFF. The root logger can be configured using logging.level.root.

#logging.level.root=WARN

logging.level.org.springframework.web=ERROR
logging.level.com.howtodoinjava=DEBUG
In the above configuration, I upgraded the log level for application classes to DEBUG (from the default INFO). Now observe the logs:

2017-03-02 23:57:14.966 DEBUG 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : debug log statement printed
2017-03-02 23:57:14.967 INFO 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : info log statement printed
2017-03-02 23:57:14.967 WARN 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : warn log statement printed
2017-03-02 23:57:14.967 ERROR 4092 — [nio-8080-exec-1] c.h.app.controller.IndexController : error log statement printed

Set logging pattern
To change the logging patterns, use the logging.pattern.console and logging.pattern.file properties.

# Logging pattern for the console
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss} – %msg%n

# Logging pattern for file
logging.pattern.file=%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} – %msg%n
After changing console logging pattern in application, log statements are printed as below:

2017-03-03 12:59:13 – This is a debug message
2017-03-03 12:59:13 – This is an info message
2017-03-03 12:59:13 – This is a warn message
2017-03-03 12:59:13 – This is an error message

Set logging output to file
To print the logs to a file, use the logging.file or logging.path property.

logging.file=c:/users/howtodoinjava/application-debug.log
Verify the logs in file.

2017-03-03 13:02:50.608 DEBUG 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is a debug message
2017-03-03 13:02:50.608 INFO 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is an info message
2017-03-03 13:02:50.608 WARN 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is a warn message
2017-03-03 13:02:50.609 ERROR 10424 — [http-nio-8080-exec-1] c.h.app.controller.IndexController : This is an error message

Using active profiles to load environment specific logging configuration
It is desirable to have multiple configurations for any application, where each configuration is specific to a particular runtime environment. In spring boot, you can achieve this by creating multiple application-{profile}.properties files in the same location as the application.properties file.

Profile specific properties always override the non-profile specific ones. If several profiles are specified, a last wins strategy applies.

If I have two environments for my application, i.e. prod and dev, then I will create two profile-specific properties files.

application-dev.properties

logging.level.com.howtodoinjava=DEBUG
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss} – %msg%n
application-prod.properties

logging.level.com.howtodoinjava=ERROR
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss} – %msg%n
To supply profile information to the application, the spring.profiles.active property is passed at launch time.

$ java -jar -Dspring.profiles.active=prod spring-boot-demo.jar

Color-coded logging output
If your terminal supports ANSI, color output will be used to aid readability. You can set spring.output.ansi.enabled value to either ALWAYS, NEVER or DETECT.

Color coding is configured using the %clr conversion word. In its simplest form the converter will color the output according to the log level.

FATAL and ERROR – Red
WARN – Yellow
INFO, DEBUG and TRACE – Green


Security in web service via ContainerRequestFilter:
==================================================
Learn to create JAX-RS 2.0 REST APIs using Spring Boot and the Jersey framework, and add role based security using JAX-RS annotations e.g. @PermitAll, @RolesAllowed or @DenyAll.

At the web method we have to place @PermitAll / @RolesAllowed.
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Base64;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.StringTokenizer;

import javax.annotation.security.DenyAll;
import javax.annotation.security.PermitAll;
import javax.annotation.security.RolesAllowed;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ResourceInfo;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

/**
* This filter verify the access permissions for a user based on
* user name and password provided in request
* */
@Provider
public class SecurityFilter implements javax.ws.rs.container.ContainerRequestFilter
{
private static final String AUTHORIZATION_PROPERTY = "Authorization";
private static final String AUTHENTICATION_SCHEME = "Basic";
private static final Response ACCESS_DENIED = Response.status(Response.Status.UNAUTHORIZED).build();
private static final Response ACCESS_FORBIDDEN = Response.status(Response.Status.FORBIDDEN).build();
private static final Response SERVER_ERROR = Response.status(Response.Status.INTERNAL_SERVER_ERROR).build();

@Context
private ResourceInfo resourceInfo;

@Override
public void filter(ContainerRequestContext requestContext)
{
Method method = resourceInfo.getResourceMethod();
//Access allowed for all
if( ! method.isAnnotationPresent(PermitAll.class))
{
//Access denied for all
if(method.isAnnotationPresent(DenyAll.class))
{
requestContext.abortWith(ACCESS_FORBIDDEN);
return;
}

//Get request headers
final MultivaluedMap<String, String> headers = requestContext.getHeaders();

//Fetch authorization header
final List<String> authorization = headers.get(AUTHORIZATION_PROPERTY);

//If no authorization information present; block access
if(authorization == null || authorization.isEmpty())
{
requestContext.abortWith(ACCESS_DENIED);
return;
}

//Get encoded username and password
final String encodedUserPassword = authorization.get(0).replaceFirst(AUTHENTICATION_SCHEME + " ", "");

//Decode username and password
String usernameAndPassword = null;
try {
usernameAndPassword = new String(Base64.getDecoder().decode(encodedUserPassword));
} catch (Exception e) {
requestContext.abortWith(SERVER_ERROR);
return;
}

//Split username and password tokens
final StringTokenizer tokenizer = new StringTokenizer(usernameAndPassword, ":");
final String username = tokenizer.nextToken();
final String password = tokenizer.nextToken();

//Verifying Username and password
if(!(username.equalsIgnoreCase("admin") && password.equalsIgnoreCase("password"))){
requestContext.abortWith(ACCESS_DENIED);
return;
}

//Verify user access
if(method.isAnnotationPresent(RolesAllowed.class))
{
RolesAllowed rolesAnnotation = method.getAnnotation(RolesAllowed.class);
Set<String> rolesSet = new HashSet<String>(Arrays.asList(rolesAnnotation.value()));

//Is user valid?
if( ! isUserAllowed(username, password, rolesSet))
{
requestContext.abortWith(ACCESS_DENIED);
return;
}
}
}
}
private boolean isUserAllowed(final String username, final String password, final Set<String> rolesSet)
{
boolean isAllowed = false;

//Step 1. Fetch password from database and match with password in argument
//If both match then get the defined role for user from database and continue; else return isAllowed [false]
//Access the database and do this part yourself
//String userRole = userMgr.getUserRole(username);
String userRole = "ADMIN";

//Step 2. Verify user role
if(rolesSet.contains(userRole))
{
isAllowed = true;
}
return isAllowed;
}
}
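A resource method protected by the filter above could look like the following minimal sketch (class name, path and payload are assumptions):

import javax.annotation.security.RolesAllowed;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/secured")
public class SecuredResource {

    // SecurityFilter above checks the Basic auth header and this annotation.
    @RolesAllowed("ADMIN")
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String adminOnly() {
        return "visible to ADMIN role only";
    }
}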

@Grab(‘spring-boot-starter-thymeleaf’)
@RunWith(SpringRunner.class)
@SpringBootApplication
@SpringApplicationConfiguration(classes= MyApplication.class)
@WebApplicationConfiguration
@EnableAutoConfiguration
@RestController
@RequestMapping
@ResponseBody
@Component
@Service
@Controller
@Repository
@CacheResult
@EnableWebSecurity
@EnableJms
@Test
@EnableEurekaClient
@EnableEurekaServer
@ControllerAdvice
@Configuration
@EnableSwagger2
@Api
@ApiOperation
@ApiParam
@ApiResponse
@ClassConfigurations
@RunWith(SpringRunner.class)
@SpringBootTest(
webEnvironment = WebEnvironment.RANDOM_PORT,
classes = Application.class)
@AutoConfigureMockMvc
@TestPropertySource(
locations = “classpath:application-integrationtest.properties”)
MockMvc
@WebMvcTest
@MockBean
@TestConfiguration
@DataJpaTest
@EnableAutoConfiguration(exclude={
DataSourceAutoConfiguration.class,
DataSourceTransactionManagerAutoConfiguration.class
})

@Cacheable
@EnableScheduling
@Scheduled
@CacheEvict
@CachePut
@CachePut("res.value>10")

If you want or need to work with a Java array then you can always use
the java.util.Arrays utility classes’ static asList() method to convert your array to a List.

Something along those lines should work.

String mStringArray[] = { "String1", "String2" };

JSONArray mJSONArray = new JSONArray(Arrays.asList(mStringArray));
Beware that code is written offhand so consider it pseudo-code.

http://www.baeldung.com/spring-cache-tutorial

Heap => newly created objects.
The heap has 2 generations => YoungGen and OldSpace.
YoungGen => all newly created objects initially go to YoungGen; when it is full, a young GC is performed.
=> All surviving objects are then moved to the Old Generation.
=> If the old generation is full, an Old Collection is performed.

Heap:
YoungGen/OldGen
The reasoning behind a nursery is that most objects are temporary and short lived.
A young collection is designed to be swift at finding newly allocated objects that are still alive and moving them away from the nursery.
Typically, a young collection frees a given amount of memory much faster than an old collection or a garbage collection of a
single-generational heap (a heap without a nursery)
Keep Area:
a part of the nursery is reserved as a keep area. The keep area contains the most recently allocated objects in the nursery
and is not garbage collected until the next young collection. This prevents objects from being promoted just because they were
allocated right before a young collection started.
During object allocation, the JRockit JVM distinguishes between small and large objects. The limit for when an object is considered large depends on the JVM version, the heap size, the garbage collection strategy and the platform used, but is usually somewhere between 2 and 128 kB. Please see the documentation for -XXtlaSize and -XXlargeObjectLimit for more information.

Small objects are allocated in thread local areas (TLAs). The thread local areas are free chunks reserved from the heap and given to a Java thread for exclusive use. The thread can then allocate objects in its TLA without synchronizing with other threads. When the TLA becomes full, the thread simply requests a new TLA. The TLAs are reserved from the nursery if such exists, otherwise they are reserved anywhere in the heap.

Large objects that don’t fit inside a TLA are allocated directly on the heap. When a nursery is used, the large objects are allocated directly in old space. Allocation of large objects requires more synchronization between the Java threads, although the JRockit JVM uses a system of caches of free chunks of different sizes to reduce the need for synchronization and improve the allocation speed.

Objects that are allocated next to each other will not necessarily become unreachable (“die”) at the same time. This means that the heap may become fragmented after a garbage collection, so that the free spaces in the heap are many but small, making allocation of large objects hard or even impossible. Free spaces that are smaller than the minimum thread local area (TLA) size can not be used at all, and the garbage collector discards them as dark matter until a future garbage collection frees enough space next to them to create a space large enough for a TLA.

To reduce fragmentation, the JRockit JVM compacts a part of the heap at every garbage collection (old collection). Compaction moves objects closer together and further down in the heap, thus creating larger free areas near the top of the heap. The size and position of the compaction area as well as the compaction method is selected by advanced heuristics, depending on the garbage collection mode used.

Compaction is performed at the beginning of or during the sweep phase and while all Java threads are paused.

For information on how to tune compaction, see Tuning the Compaction of Memory.

The stack is a part of memory that contains information about nested method calls down to the current position in the program.
It also contains all local variables and references to objects on the heap defined in currently executing methods.

This structure allows the runtime to return from the method knowing the address whence it was called, and also clear all local variables after exiting the method. Every thread has its own stack.

The heap is a large bulk of memory intended for allocation of objects. When you create an object with the new keyword, it gets allocated on the heap. However, the reference to this object lives on the stack.
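A tiny illustration of the stack/heap split described above (sketch only):

public class StackHeapDemo {

    public static void main(String[] args) {
        int counter = 42;                       // primitive local: lives in main()'s stack frame
        StringBuilder sb = new StringBuilder(); // object: allocated on the heap
        sb.append("hello");                     // 'sb' (the reference) is on the stack, the builder itself on the heap
        System.out.println(counter + " " + sb);
    }   // when main() returns, its stack frame (counter, sb) is discarded;
        // the StringBuilder becomes unreachable and is eligible for GC
}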

1) One of the obvious differences between Serializable and Externalizable is that Serializable is a marker interface, i.e. it does not contain any method, while the Externalizable interface contains two methods: writeExternal() and readExternal().

2) The second difference between Serializable vs Externalizable is the responsibility for serialization. When a class implements the Serializable interface, the default serialization process kicks in and takes responsibility for serializing the super class state. When a class implements java.io.Externalizable, it is your responsibility to implement the serialization process, i.e. preserving all important information.

3) The third difference between Serializable and Externalizable is performance. You cannot do much to improve the performance of the default serialization process except reducing the number of fields to be serialized by using the transient and static keywords, but with the Externalizable interface you have full control over the serialization process.

4) Another important difference between Serializable and Externalizable is maintenance. When your Java class implements Serializable, it is tied to the default representation, which is fragile and easily breakable if the structure of the class changes, e.g. adding or removing a field. By using java.io.Externalizable you can create your own custom binary format for your object.

These are some of the important differences between Serializable vs Externalizable in Java. It's even better if you try to write one Java program with the Serializable interface and another implementing Externalizable. Then you can evaluate all these differences between them.

Read more: http://www.java67.com/2012/10/difference-between-serializable-vs-externalizable-interface.html#ixzz593hZGA96
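A minimal sketch of a class taking control of its own serialization via Externalizable (class and field names are assumptions):

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class Account implements Externalizable {

    private int id;
    private String name;

    // Externalizable requires a public no-arg constructor,
    // because readExternal() is invoked on a freshly created instance.
    public Account() {
    }

    public Account(int id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        // You decide the exact binary format here.
        out.writeInt(id);
        out.writeUTF(name);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        // Must read fields back in the same order they were written.
        id = in.readInt();
        name = in.readUTF();
    }
}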

JSON in Java has some great resources.

Maven dependency:

<dependency>
    <groupId>org.json</groupId>
    <artifactId>json</artifactId>
    <version>20171018</version>
</dependency>

XML.java is the class you’re looking for:

import org.json.JSONException;
import org.json.JSONObject;
import org.json.XML;

public class Main {

    public static int PRETTY_PRINT_INDENT_FACTOR = 4;
    public static String TEST_XML_STRING =
            "<test attrib=\"moretest\">Turn this to JSON</test>";

    public static void main(String[] args) {
        try {
            JSONObject xmlJSONObj = XML.toJSONObject(TEST_XML_STRING);
            String jsonPrettyPrintString = xmlJSONObj.toString(PRETTY_PRINT_INDENT_FACTOR);
            System.out.println(jsonPrettyPrintString);
        } catch (JSONException je) {
            System.out.println(je.toString());
        }
    }
}
Output is:

{"test": {
    "attrib": "moretest",
    "content": "Turn this to JSON"
}}
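The same org.json library can go the other way as well; a short sketch (the input JSON string here is just the output from above):

import org.json.JSONObject;
import org.json.XML;

public class JsonToXml {
    public static void main(String[] args) throws Exception {
        JSONObject json = new JSONObject(
                "{\"test\": {\"attrib\": \"moretest\", \"content\": \"Turn this to JSON\"}}");
        // XML.toString() serializes a JSONObject back into an XML string.
        String xml = XML.toString(json);
        System.out.println(xml);
    }
}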

https://github.com/google/gson -> java to json and json to java

https://tecadmin.net/crontab-in-linux-with-20-examples-of-cron-schedule/#

DROP TABLE IF EXISTS customer;
CREATE TABLE customer (
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(100) NOT NULL,
email VARCHAR(100) NOT NULL,
created_date DATE NOT NULL,
PRIMARY KEY (id));
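A matching JPA entity for the customer table above could look like this sketch (the Customer class name is an assumption; getters/setters omitted):

import java.util.Date;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

@Entity
@Table(name = "customer")
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // matches AUTO_INCREMENT
    private Integer id;

    @Column(nullable = false, length = 100)
    private String name;

    @Column(nullable = false, length = 100)
    private String email;

    @Temporal(TemporalType.DATE)
    @Column(name = "created_date", nullable = false)
    private Date createdDate;

    // getters and setters omitted for brevity
}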

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.1</version>
    <configuration>
        <fork>true</fork>
        <executable>C:\Program Files\Java\jdk1.8.0_161\bin\javac.exe</executable>
    </configuration>
</plugin>

Here in simpler words:

DOM

Tree model parser (Object based) (Tree of nodes).

DOM loads the whole file into memory and then parses it.

Has memory constraints since it loads the whole XML file before parsing.

DOM is read and write (can insert or delete nodes).

If the XML content is small, then prefer DOM parser.

Backward and forward search is possible for searching the tags and evaluation of the information inside the tags. So this gives the ease of navigation.

Slower at run time.

SAX

Event based parser (Sequence of events).

SAX parses the file as it reads it, i.e. parses node by node.

No memory constraints as it does not store the XML content in the memory.

SAX is read only i.e. can’t insert or delete the node.

Use SAX parser when the XML content is large.

SAX reads the XML file from top to bottom and backward navigation is not possible.

Faster at run time.

Well, you are close.

In SAX, events are triggered as the XML is being parsed. When the parser encounters a start tag (e.g. <name>), it triggers the tagStarted event (the actual name of the event might differ). Similarly, when the matching end tag (</name>) is met while parsing, it triggers tagEnded. Using a SAX parser implies you need to handle these events and make sense of the data returned with each event.

In DOM, there are no events triggered while parsing. The entire XML is parsed and a DOM tree (of the nodes in the XML) is generated and returned. Once parsed, the user can navigate the tree to access the various data previously embedded in the various nodes in the XML.

In general, DOM is easier to use but has an overhead of parsing the entire XML before you can start using it.
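A minimal SAX handler illustrating the event callbacks described above (sketch; parses a hard-coded string in memory):

import java.io.StringReader;

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class SaxDemo {

    public static void main(String[] args) throws Exception {
        String xml = "<library><book title=\"Java\"/><book title=\"XML\"/></library>";

        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new InputSource(new StringReader(xml)), new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName, String qName, Attributes attributes) {
                // fired for every opening tag as the document streams past
                System.out.println("start: " + qName);
            }

            @Override
            public void endElement(String uri, String localName, String qName) {
                // fired for every closing tag
                System.out.println("end:   " + qName);
            }
        });
    }
}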


Tomcat pooling, HikariCP, Commons DBCP and Commons DBCP2 -> by default Tomcat

public abstract class SpringBootServletInitializer
extends Object
A handy opinionated WebApplicationInitializer for applications that starts a Spring Boot application and lets it bind to the servlet and filter mappings. If your application is more complicated consider using one of the other WebApplicationInitializers.
Note that a WebApplicationInitializer is only needed if you are building a war file and deploying it. If you prefer to run an embedded container (we do) then you won’t need this at all.

Integration testing with SB:
——————————
@RunWith(SpringRunner.class)
@SpringBootTest(
webEnvironment = WebEnvironment.RANDOM_PORT,
classes = Application.class)
@AutoConfigureMockMvc
@TestPropertySource(
locations = "classpath:application-integrationtest.properties")

@RunWith(SpringRunner.class)
@WebMvcTest(EmployeeRestController.class)
public class EmployeeRestControllerTest {

@Autowired
private MockMvc mvc;

@MockBean
private EmployeeService service;

// write test cases here
}
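Inside the EmployeeRestControllerTest skeleton above, a test could then drive the controller through MockMvc; a minimal sketch (the /employees path and getAllEmployees() are hypothetical names, and the listed static imports are assumed at the top of the class):

// static imports assumed at the top of the test class:
//   import static org.mockito.BDDMockito.given;
//   import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
//   import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@Test
public void whenGetEmployees_thenStatus200() throws Exception {
    // stub the @MockBean service so the controller has something to return
    given(service.getAllEmployees()).willReturn(java.util.Collections.emptyList());

    mvc.perform(get("/employees")
            .contentType(org.springframework.http.MediaType.APPLICATION_JSON))
       .andExpect(status().isOk());
}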

@RunWith(SpringRunner.class) is used to provide a bridge between Spring Boot test features and JUnit. Whenever we are using any Spring Boot testing features in our JUnit tests, this annotation will be required.

@DataJpaTest provides some standard setup needed for testing the persistence layer:

configuring H2, an in-memory database
setting Hibernate, Spring Data, and the DataSource
performing an @EntityScan
turning on SQL logging

The given part describes the state of the world before you begin the behavior you’re specifying in this scenario. You can think of it as the pre-conditions to the test.
The when section is that behavior that you’re specifying.
Finally the then section describes the changes you expect due to the specified behavior.

The assertThat(…) part comes from the Assertj library which comes bundled with Spring Boot.

The spring-boot-starter-test is the primary dependency that contains the majority of elements required for our tests.

The H2 DB is our in-memory database. It eliminates the need for configuring and starting an actual database for test purposes.
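Putting the pieces above together (given/when/then structure, AssertJ, H2), a persistence-layer test could look like this sketch; EmployeeRepository, the Employee entity and findByName() are hypothetical names:

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.autoconfigure.orm.jpa.TestEntityManager;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@DataJpaTest
public class EmployeeRepositoryIntegrationTest {

    @Autowired
    private TestEntityManager entityManager;   // convenience wrapper around the JPA EntityManager

    @Autowired
    private EmployeeRepository repository;     // hypothetical Spring Data repository

    @Test
    public void whenFindByName_thenReturnEmployee() {
        // given - state of the world before the behaviour under test
        Employee alex = new Employee("alex");
        entityManager.persistAndFlush(alex);

        // when - the behaviour being specified
        Employee found = repository.findByName("alex");

        // then - the expected outcome, checked with AssertJ
        assertThat(found.getName()).isEqualTo(alex.getName());
    }
}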

@Path("/user")
@Api(value = "/user", description = "Operations about user")
@Produces({"application/json", "application/xml"})
public class UserResource {

    @GET
    @Path("/{username}")
    @ApiOperation(
            value = "Get user by user name",
            response = User.class,
            position = 0)
    @ApiResponses(value = {
            @ApiResponse(code = 400, message = "Invalid username supplied"),
            @ApiResponse(code = 404, message = "User not found")})
    public Response getUserByName(
            @ApiParam(
                    value = "The name that needs to be fetched. Use user1 for testing. ",
                    required = true)
            @PathParam("username") String username)
            throws ApiException {
        // your method code
    }
}

Principal in Spring Security:
—————————-
The principal is the currently logged in user. However, you retrieve it through the security context which is bound to the current thread and as such it’s also bound to the current request and its session.

SecurityContextHolder.getContext() internally obtains the current SecurityContext implementation through a ThreadLocal variable. Because a request is bound to a single thread this will get you the context of the current request.

To simplify you could say that the security context is in the session and contains user/principal and roles/authorities.

How do I retrieve a specific user?

You don’t. All APIs are designed to allow access to the user & session of the current request. Let user A be one of 100 currently authenticated users. If A issues a request against your server it will allocate one thread to process that request. If you then do SecurityContextHolder.getContext().getAuthentication() you do so in the context of this thread. By default from within that thread you don’t have access to the context of user B which is processed by a different thread.

And how do I differentiate between users that are doing requests?

You don’t have to, that’s what the Servlet container does for you.
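In code, the principal of the current request can be obtained like this minimal sketch (controller name and path are assumptions):

import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class WhoAmIController {

    @RequestMapping("/whoami")
    @ResponseBody
    public String whoAmI() {
        // Bound to the current thread, hence to the current request/session.
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        return auth == null ? "anonymous" : auth.getName();
    }
}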

relation mapping:
——————–

@Enumerated(EnumType.STRING)
private Level level;
public enum Level {

GOOD, AWESOME, GODLIKE

}

@ManyToMany(cascade = CascadeType.MERGE)
@JsonBackReference
@JoinTable(name = "people_parties",
joinColumns = @JoinColumn(name = "person_id", referencedColumnName = "person_id"),
inverseJoinColumns = @JoinColumn(name = "party_id", referencedColumnName = "party_id"))
private Set parties = new HashSet();

@ManyToMany
@JoinTable(name = "people_parties",
joinColumns = @JoinColumn(name = "party_id", referencedColumnName = "party_id"),
inverseJoinColumns = @JoinColumn(name = "person_id", referencedColumnName = "person_id"))
private Set people = new HashSet();

@Column(name = "party_date")
@JsonFormat(pattern = "yyyy-MM-dd")
private Date date;

@OneToMany(mappedBy = "person", cascade = CascadeType.ALL)
private Set skills = new HashSet();

@ManyToOne
@JoinColumn(name = "person_id")
@JsonBackReference
private Person person;
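Put together, the @OneToMany/@ManyToOne fragments above correspond to a bidirectional mapping roughly like the following sketch (the Skill entity name is inferred from the 'skills' field; ids and getters/setters are placeholders):

import java.util.HashSet;
import java.util.Set;

import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

import com.fasterxml.jackson.annotation.JsonBackReference;

@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    // one person has many skills; 'person' is the owning field in Skill
    @OneToMany(mappedBy = "person", cascade = CascadeType.ALL)
    private Set<Skill> skills = new HashSet<>();
}

@Entity
class Skill {

    @Id
    @GeneratedValue
    private Long id;

    // owning side holds the foreign key column
    @ManyToOne
    @JoinColumn(name = "person_id")
    @JsonBackReference   // breaks the JSON cycle when serializing
    private Person person;
}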

@JsonIgnore, @JsonBackReference and @JsonManagedReference:
———————————————————

Let's suppose we have

private class Player {
public int id;
public Info info;
}
private class Info {
public int id;
public Player parentPlayer;
}

// something like this:
Player player = new Player(1);
player.info = new Info(1, player);
Serialization
@JsonIgnore

private class Info {
public int id;
@JsonIgnore
public Player parentPlayer;
}
and @JsonManagedReference + @JsonBackReference

private class Player {
public int id;
@JsonManagedReference
public Info info;
}

private class Info {
public int id;
@JsonBackReference
public Player parentPlayer;
}
will produce the same output. And the output for the demo case above is: {"id":1,"info":{"id":1}}

Deserialization
Here is the main difference: deserialization with @JsonIgnore will just set the field to null, so in our example parentPlayer will be == null.


But with @JsonManagedReference + @JsonBackReference we will get the Info reference there.


Both annotation pairs are used to solve the infinite recursion (StackOverflowError) problem.

@JsonIgnore is not designed to solve the Infinite Recursion problem, it just ignores the annotated property from being serialized or deserialized. But if there was a two-way linkage between fields, since @JsonIgnore ignores the annotated property, you may avoid the infinite recursion.

On the other hand, @JsonManagedReference and @JsonBackReference are designed to handle this two-way linkage between fields, one for Parent role, the other for Child role, respectively:

For avoiding the problem, linkage is handled such that the property annotated with @JsonManagedReference annotation is handled normally (serialized normally, no special handling for deserialization) and the property annotated with @JsonBackReference annotation is not serialized; and during deserialization, its value is set to instance that has the “managed” (forward) link.

To recap, if you don’t need those properties in the serialization or deserialization process, you can use @JsonIgnore. Otherwise, using the @JsonManagedReference /@JsonBackReference pair is the way to go.


http://www.baeldung.com/jackson-bidirectional-relationships-and-infinite-recursion

Basic authentication in Jersey:
——————————-
Building Request AuthenticationFilter
You know that JAX-RS 2.0 has filters for pre and post request handling, so we will use the ContainerRequestFilter interface.
In this filter, we get details of the method the request is trying to access. We find out all security related configuration on that method,
and verify everything here in this filter, e.g. annotations like @PermitAll, @DenyAll or @RolesAllowed.

@RolesAllowed(“ADMIN”)
@GET
@Produces(MediaType.APPLICATION_JSON)


For load-on-startup, a value of 0 or greater means the servlet is initialized eagerly when the container starts (lower values load first); a negative or missing value means lazy initialization on the first request. Setting load-on-startup for the dispatcher servlet means the Spring container is initialized on app server (Tomcat etc.) startup.

http://riteshkrmodi.blogspot.com/
loadOnStartup = 2

cntrl+ f10+ n

@Path("/hello")
@Api(value = "hello", description = "Sample hello world application")
public class HelloWorldService {

    @GET
    @Path("/{param}")
    @ApiOperation(value = "just to test the sample api")
    public Response getMsg(@ApiParam(value = "param", required = true) @PathParam("param") String msg) {

        String output = "Hello : " + msg;

        return Response.status(200).entity(output).build();
    }
}

@Configuration
@EnableSwagger2
public class SwaggerConfig {
@Bean
public Docket api() {
return new Docket(DocumentationType.SWAGGER_2)
.select()
.apis(RequestHandlerSelectors.any())
.paths(PathSelectors.any())
.build();
}
}

And of course, for the actual exception handling logic in Spring, we’ll use the @ControllerAdvice annotation:

@ControllerAdvice
public class CustomRestExceptionHandler extends ResponseEntityExceptionHandler {

}

BindException: This exception is thrown when fatal binding errors occur.
MethodArgumentNotValidException: This exception is thrown when argument annotated with @Valid failed validation:
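ResponseEntityExceptionHandler exposes a protected handleMethodArgumentNotValid(...) method that the @ControllerAdvice class above can override; a minimal sketch (Spring 4/5 method signature; the plain-string error body here is a simplification):

import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.MethodArgumentNotValidException;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.context.request.WebRequest;
import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;

@ControllerAdvice
public class CustomRestExceptionHandler extends ResponseEntityExceptionHandler {

    @Override
    protected ResponseEntity<Object> handleMethodArgumentNotValid(MethodArgumentNotValidException ex,
            HttpHeaders headers, HttpStatus status, WebRequest request) {
        // Collapse all field errors into one readable message; a richer error
        // body (code, link, developerMessage) could be returned instead.
        String message = ex.getBindingResult().getFieldErrors().stream()
                .map(e -> e.getField() + ": " + e.getDefaultMessage())
                .reduce((a, b) -> a + "; " + b)
                .orElse("validation failed");
        return new ResponseEntity<>(message, headers, HttpStatus.BAD_REQUEST);
    }
}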

@ApplicationPath("api")
public class MyApplication extends ResourceConfig {

public MyApplication() {
// Register resources and providers.
// …

// Properties.
property(ServerProperties.SUBRESOURCE_LOCATOR_CACHE_SIZE, 1000);
property(ServerProperties.SUBRESOURCE_LOCATOR_CACHE_AGE, 60 * 10);
}
}


{
    "status": 400,
    "code": 4000,
    "message": "Provided data not sufficient for insertion",
    "link": "http://www.codingpedia.org/ama/tutorial-rest-api-design-and-implementation-with-jersey-and-spring",
    "developerMessage": "Please verify that the feed is properly generated/set"
}

Here’s the explanation for each of the properties in the response:

status – holds the HTTP error status code redundantly, so that the developer can "see" it without having to analyze the response's header
code – this is an internal code specific to the API (should be more relevant for business exceptions)

message – short description of the error, what might have caused it and possibly a "fixing" proposal
link – points to an online resource, where more details can be found about the error
developerMessage – detailed message, containing additional data that might be relevant to the developer. This should only be available when the "debug" mode is switched on and could potentially contain stack trace information or something similar
2. Implementation

Now that I know how errors should look, let's write some code to make this a reality. In Jersey you currently have two possibilities to handle exceptions:

WebApplicationException, OR/AND
mapping exceptions to responses via ExceptionMappers
I like the second approach because it gives more control on the error entity I might send back. This is the approach I used throughout the tutorial.
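A minimal ExceptionMapper sketch producing an error entity in the spirit of the JSON shown above (the mapper class name is an assumption; a real implementation would serialize a dedicated error bean with status/code/message/link/developerMessage fields):

import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

@Provider
public class GenericExceptionMapper implements ExceptionMapper<RuntimeException> {

    @Override
    public Response toResponse(RuntimeException ex) {
        // Hand-built JSON payload; a richer error bean would normally be used here.
        String entity = "{\"status\": 400, \"message\": \"" + ex.getMessage() + "\"}";
        return Response.status(Response.Status.BAD_REQUEST)
                .entity(entity)
                .type(MediaType.APPLICATION_JSON)
                .build();
    }
}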

YAML vs JSON: when to prefer one over the other

JSON
a subset of the JavaScript object notation syntax

data stored in name/value pairs
records separated by commas
field names & strings are wrapped by double quotes
YAML
stands for YAML Ain't Markup Language and is a superset of JSON

.yml files begin with '---', marking the start of the document
key value pairs are separated by colon
lists begin with a hyphen

YAML vs JSON
YAML is best suited for configuration where JSON is better as a serialization format or serving up data for your APIs.

YAML is by no means a holy grail or a replacement for JSON – you should use the data format that makes the most sense for what you are trying to accomplish.

But in some cases, YAML has a couple of big advantages over JSON, including the ability to self reference, support for complex datatypes, embedded block literals, comments, and more.

Write your configuration files in YAML format where you have the opportunity: it is designed to be readable and editable by humans.

JSON, in contrast, is only designed to be human readable, intentionally lacking features to support editing. Let's start with the lack of comment support: this is intentionally left out of the JSON spec because it's not what the format was designed for.

A big win for YAML is that it does support comments. This is very useful, especially when you use it for configuration. For data interchange, many of YAML's features lose their appeal.

YAML parsers are younger and have been known to be less secure.

JSON vs YAML
JSON wins as a serialization format. It is more explicit and more suitable for data interchange between your apis.

JSON ships with a far simpler spec than YAML. You can learn JSON a lot faster than you can learn YAML, because it is not nearly as robust in its feature set.

YAML is a superset of JSON, which means you can parse JSON with a YAML parser.

Try mixing JSON and YAML in the same document: […, ..] for annotating arrays and { "foo" : "bar" } for objects.

jar file : spring jar *.groovy
While packaging the application, spring boot includes the following directories by default:
public/**, resources/**, static/**, templates/**, META-INF/** and the default exclude directories are
repository/**, build/**, target/**, **/*.jar, **/*.groovy. Using --include we can add directories to the packaging from the default exclude directories. Using --exclude, we can remove directories from the packaging from the default include directories. For more detail we can run the help command as follows.
spring help jar

Create a New Project using Spring Boot CLI
Using the init command, Spring Boot CLI can create a new project with maven as the default build tool, using the service at https://start.spring.io . Suppose we want to create a web project using thymeleaf; then we run the command as follows.
spring init --dependencies=web,thymeleaf my-app.zip The dependencies web,thymeleaf will configure the following spring boot starters in pom.xml.
spring-boot-starter-web
spring-boot-starter-thymeleaf
spring init --build=gradle --java-version=1.8 --dependencies=web,thymeleaf --packaging=war my-app.zip

Spring boot needs @Grab annotation only to resolve third party JAR, for example spring-boot-starter-thymeleaf, freemarker etc. Spring boot automatically grabs the spring JAR as required. For example if we are using following annotations and classes then the related JAR dependencies will automatically be downloaded.
1. @Controller @RestController @EnableWebMvc : In this case Spring MVC and embedded tomcat will be downloaded.
2. @EnableWebSecurity : Spring security related JAR will be downloaded.
3. @EnableJms : JMS application related JAR will be downloaded.
4. @Test : Spring Test application related JAR will be downloaded.

spring run hello.groovy -- --server.port=8484
spring run *.groovy

@SpringBootApplication
public class MyappApplication {

public static void main(String[] args) {
SpringApplication.run(MyappApplication.class, args);
}
}

package com.dineshonjava;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.SpringApplicationConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = MyappApplication.class)
@WebAppConfiguration
public class MyappApplicationTests {

@Test
public void contextLoads() {
}

}

using gradle:
————-
build.gradle—A Gradle build specification. Had you chosen a Maven project, this would be replaced with pom.xml.
Application.java—A class with a main() method to bootstrap the application.
ApplicationTests.java— an empty JUnit test class instrumented to load a Spring application context using Spring Boot auto-configuration.
application.properties—an empty properties file for you to add configuration properties to as you see fit.

with app.groovy:
—————
No imports
No other XML configuration to define Spring MVC Components like Views,ViewResolver etc.
No web.xml and No DispatcherServlet declaration
No build scripts to create our Application war file
No need to build war file to deploy this application

It is the responsibility of the Spring Boot core components, the Groovy Compiler (groovyc) and Groovy Grape (Groovy's JAR dependency manager).
Spring Boot components use the Groovy Compiler and Groovy Grape to provide some defaults like adding required imports, providing required configuration, resolving jar dependencies, adding a main() method etc. As a Spring Boot developer, we don't need to worry about all these things; the Spring Boot framework takes care of them for us.
That's the beauty of the Spring Boot framework.
Default import statements
To help reduce the size of your Groovy code, several import statements are automatically included. Notice how the example above refers to @Component, @RestController and @RequestMapping without needing to use fully-qualified names or import statements.
Automatic main method
Unlike the equivalent Java application, you do not need to include a public static void main(String[] args) method with your Groovy scripts. A SpringApplication is automatically created, with your compiled code acting as the source.

Top down vs bottom up:
———–
Structure/procedure oriented programming languages like C programming language follows top down approach. Whereas object oriented programming languages like C++ and Java programming language follows bottom up approach.
Top down approach begins with high level design and ends with low level design or development. Whereas, bottom up approach begins with low level design or development and ends with high level design.
In top down approach, main() function is written first and all sub functions are called from main function. Then, sub functions are written based on the requirement. Whereas, in bottom up approach, code is developed for modules and then these modules are integrated with main() function.
Nowadays, both approaches are combined and followed in modern software design.

Tomcat 7/8
Jetty 9/8

MVC features:
————
Inclusion of ContentNegotiatingViewResolver and BeanNameViewResolver beans.
Support for serving static resources, including support for WebJars (see below).
Automatic registration of Converter, GenericConverter, Formatter beans.
Support for HttpMessageConverters (see below).
Automatic registration of MessageCodesResolver (see below).
Static index.html support.
Custom Favicon support.
Automatic use of a ConfigurableWebBindingInitializer bean

Cache support:
————-
Generic
· JCache (JSR-107)
· EhCache 2.x
· Hazelcast
· Infinispan
· Couchbase
· Redis
· Caffeine
· Guava
· Simple

Jsp legacy:
———–
FreeMarker
· Groovy
· Thymeleaf
· Velocity (deprecated in 1.4)
· Mustache
If you are using any of the above template engines, spring boot will automatically pick the templates from src/main/resources/templates.

@Component vs @Controller, @Service and @Repository:
—————————————————-
From Spring Documentation:

In Spring 2.0 and later, the @Repository annotation is a marker for any class that fulfills the role or stereotype (also known as Data Access Object or DAO) of a repository. Among the uses of this marker is the automatic translation of exceptions.

Spring 2.5 introduces further stereotype annotations: @Component, @Service, and @Controller. @Component is a generic stereotype for any Spring-managed component. @Repository, @Service, and @Controller are specializations of @Component for more specific use cases, for example, in the persistence, service, and presentation layers, respectively.

Therefore, you can annotate your component classes with @Component, but by annotating them with @Repository, @Service, or @Controller instead, your classes are more properly suited for processing by tools or associating with aspects. For example, these stereotype annotations make ideal targets for pointcuts.

Thus, if you are choosing between using @Component or @Service for your service layer, @Service is clearly the better choice. Similarly, as stated above, @Repository is already supported as a marker for automatic exception translation in your persistence layer.

| Annotation | Meaning |
+————+—————————————————–+
| @Component | generic stereotype for any Spring-managed component |
| @Repository| stereotype for persistence layer |
| @Service | stereotype for service layer |
| @Controller| stereotype for presentation layer (spring-mvc) |

@Component is equivalent to

@Service, @Controller, @Repository = {@Component + some more special functionality}

That means @Service, @Controller and @Repository are functionally the same.

The three annotations are used to separate “Layers” in your application,

Controllers just do stuff like dispatching, forwarding, calling service methods etc.
Service Hold business Logic, Calculations etc.
Repository are the DAOs (Data Access Objects), they access the database directly.

As many of the answers already state what these annotations are used for, we’ll here focus on some minor differences among them.

First the Similarity

First point worth highlighting again is that with respect to scan-auto-detection and dependency injection for BeanDefinition all these annotations (viz., @Component, @Service, @Repository, @Controller) are the same. We can use one in place of another and can still get our way around.

Differences between @Component, @Repository, @Controller and @Service
@Component

This is a general-purpose stereotype annotation indicating that the class is a spring component.

What’s special about @Component
<context:component-scan> only scans for @Component and does not look for @Controller, @Service and @Repository in general. They are scanned because they themselves are annotated with @Component.

Just take a look at @Controller, @Service and @Repository annotation definitions:

@Component
public @interface Service {
….
}

@Component
public @interface Repository {
….
}

@Component
public @interface Controller {

}
Thus, it's not wrong to say that @Controller, @Service and @Repository are special types of the @Component annotation. <context:component-scan> picks them up and registers the annotated classes as beans, just as if they were annotated with @Component.

They are scanned because they themselves are annotated with the @Component annotation. If we define our own custom annotation and annotate it with @Component, then it will also get scanned with <context:component-scan>.

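A minimal sketch of such a custom stereotype annotation (the annotation name is an assumption):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.springframework.stereotype.Component;

// Classes annotated with @MyStereotype are registered as beans,
// because @Component is present as a meta-annotation.
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Component
public @interface MyStereotype {
}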
@Repository

This is to indicate that the class defines a data repository.

What’s special about @Repository?

In addition to pointing out that this is an annotation-based configuration, @Repository's job is to catch platform specific exceptions and re-throw them as one of Spring's unified unchecked exceptions. For this, we're provided with PersistenceExceptionTranslationPostProcessor, which we are required to add to our Spring application context like this:

<bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>

This bean post processor adds an advisor to any bean that’s annotated with @Repository so that any platform-specific exceptions are caught and then rethrown as one of Spring’s unchecked data access exceptions.

@Controller

The @Controller annotation indicates that a particular class serves the role of a controller. The @Controller annotation acts as a stereotype for the annotated class, indicating its role.

What’s special about @Controller?

We cannot switch this annotation with any other like @Service or @Repository, even though they look the same. The dispatcher scans the classes annotated with @Controller and detects @RequestMapping annotations within them. We can only use @RequestMapping on @Controller annotated classes.

@Service

@Service classes hold business logic and call methods in the repository layer.

What’s special about @Service?

Apart from the fact that it is used to indicate that the class holds the business logic, there's no noticeable speciality that this annotation provides; but who knows, Spring may add some additional functionality for it in the future.

What else?

Similar to the above, in the future Spring may choose to add special functionality for @Service, @Controller and @Repository based on their layering conventions. Hence it's always a good idea to respect the convention and use the annotations in line with the layers.

In Spring, @Component, @Service, @Controller, and @Repository are stereotype annotations which are used for:

@Controller: where the request mapping from the presentation page is done, i.e. the presentation layer doesn't go to any other file; it goes directly to the @Controller class and checks for the requested path in the @RequestMapping annotation written before the method, if necessary.

@Service: all business logic is here, i.e. data related calculations and so on. This is the business layer annotation; the user does not call persistence methods directly, the service layer does, and it calls @Repository methods as per the user request.

@Repository: this is the persistence layer (data access layer) of the application, used to get data from the database, i.e. all database related operations are done by the repository.

@Component – Annotate your other components (for example REST resource classes) with component stereotype.

Indicates that an annotated class is a “component”. Such classes are considered as candidates for auto-detection when using annotation-based configuration and classpath scanning.

Other class-level annotations may be considered as identifying a component as well, typically a special kind of component: e.g. the @Repository annotation or AspectJ’s @Aspect annotation.


@Controller VS @RestController:
——————————-

@ResponseBody is not required for a @RestController method; the return value is written to the response body directly.

@RestController
class ThisWillActuallyRun {

    @RequestMapping("/")
    String home() {
        return "Hello World!";
    }
}

@ResponseBody is required on a normal @Controller method to write the return value to the response body; without it, the returned String is resolved as a view name.
@Controller
@EnableAutoConfiguration
public class SampleController {

    @RequestMapping("/")
    @ResponseBody
    String home() {
        return "Hello World!";
    }

    public static void main(String[] args) throws Exception {
        SpringApplication.run(SampleController.class, args);
    }
}