
 

/*

 

public class Name{
}

 

V:\>java Name
Picked up _JAVA_OPTIONS: -Xmx512M
Error: Main method not found in class Name, please define the main method as:
public static void main(String[] args)
or a JavaFX application class must extend javafx.application.Application
*/

/*

 

//Works only in Java <= 6: the static initializer ran (printing "Hello") before
//the JVM looked for main. From Java 7 onward main is checked first, so this fails.
public class Name{
static{
System.out.println("Hello");
}
}
*/

/*
public class Name{
public static void main(String args...){
System.out.println(args.length);
}
}

 

Compile error: the varargs ellipsis must follow the type (String... args), not the parameter name (String args...).

*/

/*
//String[] args[] declares args as String[][], so this compiles but fails at
//runtime with "Main method not found".
public class Name{
public static void main(String[] args[]){
System.out.println(args[0]+"---"+args[1]);
}
}
*/

/*
//Throws ArrayIndexOutOfBoundsException at runtime unless at least 3 arguments
//are passed (args[2] is the third argument).
public class Name{
public static void main(String args[]){
System.out.println(args[0]+"---"+args[2]);
}
}
*/

//Compile error: two imports with the same simple class name (Date) from different packages
/*import java.util.Date;
import java.sql.Date;
public class Name{
public static void main(String[] args){
System.out.println(new Date());
}
}
*/

 

/*
import java.sql.Date;
public class Name{
public static void main(String[] args){
Long millis= System.currentTimeMillis();
System.out.println(new Date(millis));
System.out.println(new java.util.Date());
}
}
*/

/*
import java.sql.Date;
//protected and private are not allowed on top-level classes
protected class Name{
public static void main(String[] args){
Long millis= System.currentTimeMillis();
System.out.println(new Date(millis));
System.out.println(new java.util.Date());
}
}
*/

/*
public class Name{
int a =100;
public static void main(String[] args){
Name name=new Name();
name.m1();
}
//b is declared after m1, but instance fields are initialized before any method runs.
public void m1(){
System.out.println(a+"----"+b);
}
int b=200;
}
*/

/*
public class Name{
public static void main(String[] args){
A a =new A();
}
}

 

class A{

}
*/

 

/*
public class Name{
//All of these are valid main signatures:
//public static void main(String[] args){
//static public void main(String[] args){
//static public void main(String args[]){
//static public void main(String... args){
//This one is a compile error (the ellipsis must follow the type):
static public void main(String args...){
System.out.println("Hello-------");
}
}
*/

 

/*
public class Name{
public static void main(String[] args){
A a =new A();
//accessible since the field has default (package) access, not private
a.name = "OkOk";
System.out.println(a.name);
}
}

class A{
String name;
}
*/

/*
public class Name{
String name;
public String getString(){
return name;
}
public void setString(String name){
name = name; //self-assignment: the parameter shadows the field, which would stay null
this.name = name; //qualifying with this assigns the field
}
public void display(){
System.out.println(name);
}
public static void main(String[] args){
Name name=new Name();
name.setString("hello");
name.display();
}
}
*/

 

/*
public class Name{
public static void main(String... args){
//single line
// //still single
// /* hey single, yes */
// Whatever you place after // is a single-line comment.

/*
*
*This is a multi-line comment
*
*/

/*
*
* // This is also a multi-line comment
*
*/

//Note: block comments do not nest in Java, so this whole wrapped example
//would not actually compile as a single comment.
System.out.println("heyyy");
}
}
*/

/*
public class Name{
//final variables must be initialized; the uninitialized static final field
//below is a compile error unless it is assigned in a static initializer.
private final Hello obj=null;
private static final String name;
private String yes;
//public Name(Hello obj, String name){
//this.obj=obj;
//this.name=name;
//}
public static void main(String[] args){
System.out.println(Name.name);
// Name name=new Name(null,"kkk");
System.out.println(name);
}
}

class Hello{

}
*/

/*
public class Name extends Thread{
public void run(){
System.out.println("Heey");
throw new RuntimeException("Hello");
}
public static void main(String[] args){
Thread t1 = new Thread(new Name());
t1.start();
//"last" may print before or after the exception's stack trace: thread timing decides
System.out.println("last");
}
}
*/

 

public class Name {
    public void go(){
        Runnable r = new Runnable(){
            public void run(){
                System.out.println("run Heeh");
            }
        };
        Thread t = new Thread(r);
        t.start();
    }
    public static void main(String[] args){
        Name t1 = new Name();
        t1.go();
    }
}

Default initial capacities: ArrayList - 10, Vector - 10, HashSet - 16, HashMap - 16, Hashtable - 11
Explanation:
ArrayList:
Constructs an empty list with an initial capacity of 10.
Vector:
Constructs an empty vector so that its internal data array has size 10 and its standard capacity increment is zero.
HashMap:
Constructs an empty HashMap with the default initial capacity (16) and the default load factor (0.75).
Hashtable:
Constructs a new, empty hashtable with a default initial capacity (11) and load factor (0.75).
HashSet:
Constructs a new, empty set; the backing HashMap instance has default initial capacity (16) and load factor (0.75).
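To make those defaults concrete, here is a small sketch (class and variable names are mine) showing the default constructors alongside explicit sizing:

import java.util.*;

public class DefaultCapacities {
    public static void main(String[] args) {
        // Default constructors use the documented capacities above:
        List<String> list = new ArrayList<>();               // capacity 10
        Vector<String> vec = new Vector<>();                 // capacity 10, increment 0
        Map<String, String> map = new HashMap<>();           // 16 buckets, load factor 0.75
        Hashtable<String, String> table = new Hashtable<>(); // capacity 11, load factor 0.75
        Set<String> set = new HashSet<>();                   // backed by a 16-bucket HashMap

        // Each also accepts explicit sizing to avoid early resizing:
        Map<String, String> sized = new HashMap<>(64, 0.75f);
        Vector<String> sizedVec = new Vector<>(20, 5);       // capacity 20, grows by 5
    }
}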

High Availability in HADOOP 2.0
Why we need it: Prior to Hadoop 2.0.0, the NameNode was a single point of failure (SPOF) in an HDFS cluster. Each cluster had a single NameNode, and if that machine or process became unavailable, the cluster as a whole would be unavailable until the NameNode was either restarted or brought up on a separate machine.
Solutions: the HDFS High Availability (HA) feature, using either the Quorum Journal Manager or conventional shared storage (NFS) for the shared edits directory required by the NameNodes.
HA through NFS: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html
Notes:
Like HDFS Federation, HA clusters reuse the nameservice ID to identify a single HDFS instance that may in fact consist of multiple HA NameNodes. In addition, a new abstraction called NameNode ID is added with HA. Each distinct NameNode in the cluster has a different NameNode ID to distinguish it. To support a single configuration file for all of the NameNodes, the relevant configuration parameters are suffixed with the nameservice ID as well as the NameNode ID.
To guard against the "split-brain scenario," the administrator must configure at least one fencing method for the shared storage. During a failover, if it cannot be verified that the previous Active node has relinquished its Active state, the fencing process is responsible for cutting off the previous Active's access to the shared edits storage. This prevents it from making any further edits to the namespace, allowing the new Active to safely proceed with failover.
Note that, in an HA cluster, the Standby NameNode also performs checkpoints of the namespace state, and thus it is not necessary to run a Secondary NameNode, CheckpointNode, or BackupNode in an HA cluster. In fact, to do so would be an error. This also allows one who is reconfiguring a non-HA-enabled HDFS cluster to be HA-enabled to reuse the hardware which they had previously dedicated to the Secondary NameNode.

 

HA through QJM:

JournalNode machines – the machines on which you run the JournalNodes. The JournalNode daemon is relatively lightweight, so these daemons may reasonably be collocated on machines with other Hadoop daemons, for example NameNodes, the JobTracker, or the YARN ResourceManager.
Note: There must be at least 3 JournalNode daemons, since edit log modifications must be written to a majority of JNs. This will allow the system to tolerate the failure of a single machine. You may also run more than 3 JournalNodes, but in order to actually increase the number of failures the system can tolerate, you should run an odd number of JNs (i.e. 3, 5, 7, etc.). Note that when running with N JournalNodes, the system can tolerate at most (N - 1) / 2 failures and continue to function normally.
Which is better:

QJM is obviously better than NFS.
From Apache documentation page:
In order for the Standby node to keep its state synchronized with the Active node, the current implementation requires that the two nodes both have access to a directory on a shared storage device (e.g. an NFS mount from a NAS). This restriction will likely be relaxed in the future. If the NFS mount is down or has issues, high availability can't be achieved.
In QJM, the edits are written to multiple Journal Nodes and probability of failure is less compared to NFS option.
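To make the (N - 1) / 2 quorum arithmetic above concrete, a tiny sketch (the class and method names are mine):

// With N JournalNodes, each edit must reach a majority of JNs,
// so the cluster tolerates floor((N - 1) / 2) JN failures.
public class JournalQuorum {
    static int toleratedFailures(int journalNodes) {
        return (journalNodes - 1) / 2;  // integer division acts as floor
    }

    public static void main(String[] args) {
        for (int n = 3; n <= 7; n += 2) {
            System.out.println(n + " JNs tolerate " + toleratedFailures(n) + " failure(s)");
        }
    }
}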

 

Collection
Legacy – Dictionary, Vector, Properties, Stack

Interfaces:
Collection == facilitates working with a group of objects; the root of the hierarchy.
List == extends Collection(I); an ordered collection of elements.
Set == extends Collection(I); unique values.
SortedSet == extends Set(I); handles sorted sets.
Map == unique keys mapped to values.
Map.Entry == inner interface of Map; describes an element (a key-value pair) in a Map.
SortedMap == extends Map; keys are maintained in sorted order.
Enumeration == the legacy interface whose methods let you iterate one object at a time from a group of objects. Superseded by Iterator(I).

Classes:
AbstractCollection(C): the skeletal implementation behind most collection classes. All implemented interfaces: Iterable<E>, Collection<E>
Methods:
--------
boolean add(E e) – Ensures that this collection contains the specified element (optional operation).
boolean addAll(Collection<? extends E> c) – Adds all of the elements in the specified collection to this collection (optional operation).
void clear() – Removes all of the elements from this collection (optional operation).
boolean contains(Object o) – Returns true if this collection contains the specified element.
boolean containsAll(Collection<?> c) – Returns true if this collection contains all of the elements in the specified collection.
boolean isEmpty() – Returns true if this collection contains no elements.
abstract Iterator<E> iterator() – Returns an iterator over the elements contained in this collection.
boolean remove(Object o) – Removes a single instance of the specified element from this collection, if it is present (optional operation).
boolean removeAll(Collection<?> c) – Removes all of this collection's elements that are also contained in the specified collection (optional operation).
boolean retainAll(Collection<?> c) – Retains only the elements in this collection that are contained in the specified collection (optional operation).
abstract int size() – Returns the number of elements in this collection.
Object[] toArray() – Returns an array containing all of the elements in this collection.
<T> T[] toArray(T[] a) – Returns an array containing all of the elements in this collection; the runtime type of the returned array is that of the specified array.
String toString() – Returns a string representation of this collection.

AbstractList == extends AbstractCollection(C) and implements List(I).
AbstractSequentialList == extends AbstractList; uses sequential access instead of random access.
LinkedList == gives a linked list by extending AbstractSequentialList.
ArrayList == gives a dynamic array by extending AbstractList.
AbstractSet == extends AbstractCollection(C) and implements Set(I).
HashSet == extends AbstractSet(C); uses a hash table internally.
LinkedHashSet == extends HashSet; allows insertion-order iteration.
TreeSet == extends AbstractSet; implements a Set sorted in a tree.
AbstractMap == implements Map(I).
HashMap == extends AbstractMap; uses a hash table.
TreeMap == extends AbstractMap; uses a tree.
LinkedHashMap == extends HashMap to allow insertion-order iteration.
WeakHashMap == extends AbstractMap; a hash table with weak keys.
IdentityHashMap == extends AbstractMap; uses reference equality.
Legacy Classes:—————
Vector == implements dynamic-array functionality; similar to ArrayList except that it is thread-safe (synchronized).
No, you can't change the size of an array once created. You either have to allocate it bigger than you think you'll need, or accept the overhead of reallocating when it needs to grow. When it does, you'll have to allocate a new one and copy the data from the old to the new:
int oldItems[] = new int[10];
for (int i = 0; i < 10; i++) {
    oldItems[i] = i + 10;
}
int newItems[] = new int[20];
System.arraycopy(oldItems, 0, newItems, 0, 10);
oldItems = newItems;

If you find yourself in this situation, I'd highly recommend using the Java Collections instead. In particular, ArrayList essentially wraps an array and takes care of the logic for growing the array as required:
List<xClass> myclass = new ArrayList<xClass>();
myclass.add(new xClass());
myclass.add(new xClass());

Generally an ArrayList is a preferable solution to an array anyway, for several reasons. For one thing, arrays are mutable. If you have a class that does this:
class Myclass {
    private int items[];
    public int[] getItems() { return items; }
}

you've created a problem, as a caller can change your private data member, which leads to all sorts of defensive copying. Compare this to the List version:
class Myclass {
    private List<Integer> items;
    public List<Integer> getItems() { return Collections.unmodifiableList(items); }
}

Stack == a subclass of Vector; follows Last In First Out (like a stack of bangles).
Dictionary == an abstract class that provides Map-like functionality.
Hashtable == a concrete implementation of Dictionary.
Properties == a subclass of Hashtable; uses String keys and String values.
BitSet == a special type of array that stores bits; its size can grow dynamically.
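A quick tour of a few of these legacy classes (values are arbitrary):

import java.util.*;

public class LegacyDemo {
    public static void main(String[] args) {
        Stack<Integer> stack = new Stack<>();
        stack.push(1);
        stack.push(2);
        System.out.println(stack.pop());              // 2 -- last in, first out

        Properties props = new Properties();          // String keys and values
        props.setProperty("env", "dev");
        System.out.println(props.getProperty("env")); // dev

        BitSet bits = new BitSet();                   // grows dynamically as bits are set
        bits.set(3);
        bits.set(64);                                 // past the initial word -- it expands
        System.out.println(bits);                     // {3, 64}
    }
}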

 

Arrays are mutable or not?
--------------------------
The Strings contained in the String[] are indeed immutable, but the array is mutable.
This is well explained in this answer:
Immutability means that objects of a certain type cannot change in any meaningful way to outside observers.
Integer, String, etc. are immutable. Generally all value types should be.
Array objects are mutable. It may be an array of references to immutable types, but the array itself is mutable, meaning you can set those references to anything you want. This is also true for arrays of primitives.
An immutable array would not be practical.
References to objects can be shared; if the object is mutable, mutation will be seen through all these references.
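A two-line illustration of the point:

public class ArrayMutability {
    public static void main(String[] args) {
        String[] words = {"a", "b"};
        words[0] = "z";               // the String objects are immutable, the array is not
        System.out.println(words[0]); // z
    }
}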

Why ArrayList toArray() returns Object[]
----------------------------------------
Because array has been in Java since the beginning, while generics were only introduced in Java 5. The List.toArray() method was introduced in Java 1.2, before generics existed, so it was specified to return Object[].
Set only has the methods that are in Collection(I):
---------------------------------------------------
The Set interface contains only methods inherited from Collection and adds the restriction that duplicate elements are prohibited.
Set also adds a stronger contract on the behavior of the equals and hashCode operations, allowing Set instances to be compared meaningfully even if their implementation types differ.
setObj.subSet(fromElement, toElement) and listObj.subList(int a, int b) ==> these sub-range methods always include the lower bound and exclude the upper bound, as the sketch below shows.
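A small sketch of the inclusive/exclusive bounds (values are arbitrary):

import java.util.*;

public class SubRanges {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(Arrays.asList(10, 20, 30, 40, 50));
        System.out.println(list.subList(1, 3));  // [20, 30] -- index 1 included, index 3 excluded

        TreeSet<Integer> set = new TreeSet<>(list);
        System.out.println(set.subSet(20, 40));  // [20, 30] -- 20 included, 40 excluded
    }
}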
Lexical order of strings can be done using TreeSet:
---------------------------------------------------
Set<String> strings = new TreeSet<>();
strings.add("Mysore");
strings.add("Mangalore");
strings.add("Hyderabad");
strings.add("Pune");
strings.add("Bangalore");
strings.add("Gurugraam");
System.out.println(strings);

NavigableSet in Java:
---------------------
floor(8) returns 8 if it is present, otherwise the greatest element less than 8, or null if there is no such element.
NavigableSet represents a navigable set in the Java Collection Framework. The NavigableSet interface inherits from the SortedSet interface. It behaves like a SortedSet with the exception that we have navigation methods available in addition to the sorting mechanisms of the SortedSet. For example, the NavigableSet interface can navigate the set in reverse order compared to the order defined in SortedSet.
The classes that implement this interface are TreeSet and ConcurrentSkipListSet.
Methods of NavigableSet (Not in SortedSet):
lower(E e): Returns the greatest element in this set which is less than the given element, or null if there is no such element.
floor(E e): Returns the greatest element in this set which is less than or equal to the given element, or null if there is no such element.
ceiling(E e): Returns the least element in this set which is greater than or equal to the given element, or null if there is no such element.
higher(E e): Returns the least element in this set which is greater than the given element, or null if there is no such element.
pollFirst(): Retrieves and removes the first (least) element, or returns null if there is no such element.
pollLast(): Retrieves and removes the last (highest) element, or returns null if there is no such element.

// A Java program to demonstrate working of NavigableSet
import java.util.NavigableSet;
import java.util.TreeSet;

public class hashset {
    public static void main(String[] args)
    {
        NavigableSet<Integer> ns = new TreeSet<>();
        ns.add(0);
        ns.add(1);
        ns.add(2);
        ns.add(3);
        ns.add(4);
        ns.add(5);
        ns.add(6);

        // Get a reverse view of the navigable set
        NavigableSet<Integer> reverseNs = ns.descendingSet();

        // Print the normal and reverse views
        System.out.println("Normal order: " + ns);
        System.out.println("Reverse order: " + reverseNs);

        NavigableSet<Integer> threeOrMore = ns.tailSet(3, true);
        System.out.println("3 or more: " + threeOrMore);
        System.out.println("lower(3): " + ns.lower(3));
        System.out.println("floor(3): " + ns.floor(3));
        System.out.println("higher(3): " + ns.higher(3));
        System.out.println("ceiling(3): " + ns.ceiling(3));

        System.out.println("pollFirst(): " + ns.pollFirst());
        System.out.println("Navigable Set: " + ns);

        System.out.println("pollLast(): " + ns.pollLast());
        System.out.println("Navigable Set: " + ns);

        System.out.println("pollFirst(): " + ns.pollFirst());
        System.out.println("Navigable Set: " + ns);

        System.out.println("pollFirst(): " + ns.pollFirst());
        System.out.println("Navigable Set: " + ns);

        System.out.println("pollFirst(): " + ns.pollFirst());
        System.out.println("Navigable Set: " + ns);

        System.out.println("pollFirst(): " + ns.pollFirst());
        System.out.println("pollLast(): " + ns.pollLast());
    }
}
Output:
Normal order: [0, 1, 2, 3, 4, 5, 6]
Reverse order: [6, 5, 4, 3, 2, 1, 0]
3 or more: [3, 4, 5, 6]
lower(3): 2
floor(3): 3
higher(3): 4
ceiling(3): 3
pollFirst(): 0
Navigable Set: [1, 2, 3, 4, 5, 6]
pollLast(): 6

Java class that implements Map and keeps insertion order:
---------------------------------------------------------
I suggest a LinkedHashMap or a TreeMap. A LinkedHashMap keeps the keys in the order they were inserted, while a TreeMap is kept sorted via a Comparator or the natural Comparable ordering of the elements.
Since it doesn’t have to keep the elements sorted, LinkedHashMap should be faster for most cases; TreeMap has O(log n) performance for containsKey, get, put, and remove, according to the Javadocs, while LinkedHashMap is O(1) for each.
If your API only expects a predictable sort order, as opposed to a specific sort order, consider using the interfaces these two classes implement, NavigableMap or SortedMap. This will allow you not to leak specific implementations into your API, and to switch to either of those specific classes or a completely different implementation at will afterwards.
Hashtable vs HashMap vs TreeMap vs LinkedHashMap:
-------------------------------------------------
All three classes (HashMap, TreeMap, LinkedHashMap) implement the Map interface and offer mostly the same functionality. The most important difference is the order in which iteration through the entries will happen:
HashMap makes absolutely no guarantees about the iteration order. It can (and will) even change completely when new elements are added.
TreeMap will iterate according to the "natural ordering" of the keys according to their compareTo() method (or an externally supplied Comparator). Additionally, it implements the SortedMap interface, which contains methods that depend on this sort order.
LinkedHashMap will iterate in the order in which the entries were put into the map.
"Hashtable" is the generic name for hash-based maps. In the context of the Java API, Hashtable is an obsolete class from the days of Java 1.1, before the collections framework existed. It should not be used anymore, because its API is cluttered with obsolete methods that duplicate functionality, and its methods are synchronized (which can decrease performance and is generally useless). Use ConcurrentHashMap instead of Hashtable.
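A side-by-side sketch of the three iteration orders (keys are arbitrary):

import java.util.*;

public class MapOrderDemo {
    public static void main(String[] args) {
        String[] keys = {"pear", "apple", "mango"};
        Map<String, Integer> hash = new HashMap<>();
        Map<String, Integer> linked = new LinkedHashMap<>();
        Map<String, Integer> tree = new TreeMap<>();
        for (String k : keys) {
            hash.put(k, k.length());
            linked.put(k, k.length());
            tree.put(k, k.length());
        }
        System.out.println(hash);   // no guaranteed order
        System.out.println(linked); // insertion order: {pear=4, apple=5, mango=5}
        System.out.println(tree);   // sorted by key:   {apple=5, mango=5, pear=4}
    }
}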
LinkedHashMap:
--------------
This class extends HashMap and maintains a linked list of the entries in the map, in the order in which they were inserted. This allows insertion-order iteration over the map. That is, when iterating a LinkedHashMap, the elements will be returned in the order in which they were inserted.
You can also create a LinkedHashMap that returns its elements in the order in which they were last accessed.
HashMap get() complexity:
-------------------------
It depends on many things. It's usually O(1), with a decent hash which itself is constant time... but you could have a hash which takes a long time to compute, and if there are multiple items in the hash map which return the same hash code, get will have to iterate over them calling equals on each of them to find a match.
In the worst case, a HashMap has an O(n) lookup due to walking through all entries in the same hash bucket (e.g. if they all have the same hash code). Fortunately, that worst case scenario doesn’t come up very often in real life, in my experience. So no, O(1) certainly isn’t guaranteed – but it’s usually what you should assume when considering which algorithms and data structures to use.
In JDK 8, HashMap has been tweaked so that if keys can be compared for ordering, then any densely-populated bucket is implemented as a tree, so that even if there are lots of entries with the same hash code, the complexity is O(log n). That can cause issues if you have a key type where equality and ordering are different, of course.
And yes, if you don’t have enough memory for the hash map, you’ll be in trouble… but that’s going to be true whatever data structure you use.
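To see the degradation described above in action, here is a sketch with a deliberately bad hashCode (the Key class is mine). Every entry lands in one bucket; on JDK 8+ the bucket treeifies because Key is Comparable, so get() stays around O(log n) instead of O(n):

import java.util.*;

public class BadHashDemo {
    static final class Key implements Comparable<Key> {
        final int id;
        Key(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }   // every key collides
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).id == id;
        }
        @Override public int compareTo(Key other) {
            return Integer.compare(id, other.id);
        }
    }

    public static void main(String[] args) {
        Map<Key, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            map.put(new Key(i), i);
        }
        System.out.println(map.get(new Key(9_999))); // correct answer, slower lookup
    }
}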
HashSet and HashMap load factor:
--------------------------------
There is indeed logical reasoning behind the choice. If we understand that HashSet is backed by a HashMap, and recognize that the constructor in your post calls a HashMap constructor:
public HashSet(int initialCapacity, float loadFactor) {
    map = new HashMap<>(initialCapacity, loadFactor);
}

and then proceed to the related HashMap documentation, we can see the logical reasoning behind the important choice.
As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put). The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.
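The last sentence gives a sizing rule; here is a sketch applying it (the numbers are arbitrary):

import java.util.*;

public class PresizedMap {
    public static void main(String[] args) {
        int expectedEntries = 1000;
        float loadFactor = 0.75f;
        // A capacity above expectedEntries / loadFactor means no rehash ever occurs.
        int initialCapacity = (int) Math.ceil(expectedEntries / loadFactor);
        Map<Integer, String> map = new HashMap<>(initialCapacity, loadFactor);
        for (int i = 0; i < expectedEntries; i++) {
            map.put(i, "v" + i);
        }
        System.out.println(map.size()); // 1000, built without any rehashing
    }
}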
HashMap:
--------
HashMap is a hash-table-based implementation of Map. This is the reason interviewers always ask for the difference between HashMap and Hashtable. HashMap is mostly equal to Hashtable except for the two differences below.
HashMap is unsynchronized while Hashtable is synchronized.
HashMap permits null while Hashtable doesn't.

Important properties of HashMap:
DEFAULT_INITIAL_CAPACITY – default initial capacity (a power of 2); the number of buckets the HashMap starts with.
MAXIMUM_CAPACITY – maximum capacity of the HashMap (a power of 2).
loadFactor – defines the threshold of the HashMap, i.e. when resizing will occur.
DEFAULT_LOAD_FACTOR – used when no load factor is defined in the constructor of the HashMap.
size – the number of key-value mappings the HashMap contains.

Creation of HashMap:
When no parameters are given while creating a HashMap, the default initial capacity (16) and the default load factor (0.75) are used. This HashMap can hold up to 16 elements, and resizing occurs when the 13th element is inserted. This is because the load factor is 75% (.75) and that threshold is crossed when you add the 13th element (12+1).
You can also provide an initial capacity and load factor. But the initial capacity cannot be more than the maximum capacity (2^30), and the load factor cannot be zero or a negative number.
Addition of element in HashMap
In order to add any element you need to provide two things: a key and a value.
Key : key with which specified value will be associated. null is allowed.
Value : value to be associated with specified key.
First, HashMap generates the hashcode for the given key and then checks whether a value is already associated with that key. If yes, it replaces the value and returns the previously associated one. Otherwise it adds a new entry with the provided key and value.
Bullet points:
HashMap gives no guarantee on the order of elements in the Map (the order can change over time).
HashMap provides constant-time performance for get and put operations (if a proper hashing algorithm is used).
Time required to iterate the collection is proportional to the "capacity" (elements it can hold) and the size (elements it currently holds) of the HashMap.
If iteration performance matters, it is advisable not to set the initial capacity too high or the load factor too low, as iteration cost is directly proportional to initial capacity and load factor.
Capacity is the number of buckets in the hash table; the initial capacity (default 16) is simply the capacity at the time the hash table is created.
The load factor (default .75) is a measure of how full the hash table is allowed to get before its capacity is automatically increased.
When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt).
Use Collections.synchronizedMap() to make a Map synchronized.
Iterators returned by the HashMap class are "fail-fast".
HashMap is backed by an array of buckets, each bucket holding a linked list of entries.
HashMap uses the hashcode (computed from the key) to identify the exact location (index in the backing array) where an entry should be placed or retrieved.
The backing array has a fixed size, so whenever the number of keys reaches the threshold, a new array with a larger capacity is created and all elements are moved to it.
The hashcode is used in both cases (adding and retrieving an object), while the equals() method may or may not be used.
The best candidate for a key in a HashMap is an immutable class with properly implemented equals and hashCode methods (example: String). The better the hashCode and equals implementations, the better the performance of the HashMap. In this way, String and the wrapper classes of all primitives are great candidates for HashMap keys.

What is rehashing?
Every HashMap has a predefined size (initial capacity) and logic to increase that size (governed by the load factor) whenever required (when the threshold is crossed).
Example :
Create HashMap with below configuration
Initial Capacity = 16 (Default Initial Capacity)
Load Factor : .75 (Default load factor)
The moment you add the 13th element to this HashMap, the threshold is crossed and the system creates a new backing array (double the size of the previous one). The system recalculates the exact bucket where each element should be placed and copies all elements from the old table into the new one. This whole process is called rehashing, because the bucket index is calculated for each element again.
Because, over time, a HashMap may be rehashed, its iteration order can change.
TreeSet(C):
-----------
TreeSet provides an implementation of the Set interface that uses a tree for storage. Objects are stored in sorted, ascending order.
Access and retrieval times are quite fast, which makes TreeSet an excellent choice when storing large amounts of sorted information that must be found quickly.
To implement your own sorting with TreeSet on user-defined objects, pass a Comparator to the TreeSet constructor. The Comparator implementation holds the sorting logic; override its compare() method to define the ordering of your objects. The example below shows how to sort a TreeSet of user-defined objects using a comparator.
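The original example was not captured in these notes, so here is a hedged reconstruction (the Employee type and its fields are illustrative, not from the source):

import java.util.*;

public class TreeSetComparatorDemo {
    static final class Employee {
        final String name;
        final int salary;
        Employee(String name, int salary) { this.name = name; this.salary = salary; }
        @Override public String toString() { return name + ":" + salary; }
    }

    public static void main(String[] args) {
        // The Comparator passed to the constructor holds the sorting logic.
        TreeSet<Employee> bySalary = new TreeSet<>(new Comparator<Employee>() {
            @Override public int compare(Employee a, Employee b) {
                return Integer.compare(a.salary, b.salary);
            }
        });
        bySalary.add(new Employee("Asha", 900));
        bySalary.add(new Employee("Ravi", 500));
        bySalary.add(new Employee("Kiran", 700));
        System.out.println(bySalary); // [Ravi:500, Kiran:700, Asha:900]
    }
}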
WeakHashMap:
------------
WeakHashMap is an implementation of Map(I) and is similar to HashMap, with one minimal but important difference:
WeakHashMap is an implementation of the Map interface that stores only weak references to its keys. Storing only weak references allows a key-value pair to be garbage-collected when its key is no longer referenced outside of the WeakHashMap.
This class provides the easiest way to harness the power of weak references. It is useful for implementing “registry-like” data structures, where the utility of an entry vanishes when its key is no longer reachable by any thread.
The WeakHashMap functions identically to the HashMap with one very important exception: if the Java memory manager no longer has a strong reference to the object specified as a key, then the entry in the map will be removed.
Weak reference − If the only references to an object are weak references, the garbage collector can reclaim the object's memory at any time; it doesn't have to wait until the system runs out of memory. Usually, it will be freed the next time the garbage collector runs.
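A minimal sketch of this behavior; note that System.gc() is only a hint, so the final size may still print 1 if no collection ran:

import java.util.*;

public class WeakMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> map = new WeakHashMap<>();
        Object key = new Object();
        map.put(key, "payload");
        System.out.println("before: " + map.size()); // 1
        key = null;                  // drop the only strong reference to the key
        System.gc();                 // request (not force) a collection
        Thread.sleep(100);           // give the collector a chance to run
        System.out.println("after:  " + map.size()); // likely 0
    }
}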

WeakHashMap and garbage collection:
-----------------------------------
Q: However, I don't understand what it means by "it garbage-collects a mapping when it is no longer in ordinary use."
OK. Under normal circumstances, when the garbage collector runs it will remove objects that your program can no longer use. The technical term is an "unreachable object", and it means that the program execution has no way to get hold of a reference to the object any more. An object that is unreachable may be collected in the next GC cycle ... or not. Either way, it is no longer the application's concern.
In this case, the WeakHashMap uses a special class called WeakReference to refer to the keys. A weak reference is an object that acts sort of like an indirect pointer (a pointer to an object holding a pointer). It has the interesting property that the garbage collector is allowed to break the reference; i.e. replace the reference it contains with null. And the rule is that a weak reference to an object will be broken when the GC notices that the object is no longer reachable via a chain of normal (strong) or soft references1.
The phrase “no longer in ordinary use” really means that the key object is no longer strongly or softly reachable; i.e. via a chain of strong and / or soft references.
Q: How does the data structure know I will no longer use a key in my program?
The WeakHashMap doesn't do it. Rather, it is the GC that notices that the key is not strongly reachable.
As part of its normal traversal, the GC will find and mark all strongly reachable objects. Then it goes through all of the WeakReference objects and checks to see if the objects they refer to have been marked, and breaks them if they have not. (Or something like that … I’ve never looked at the actual GC implementation. And it is complicated by the fact that it has to deal with SoftReference and PhantomReference objects as well.)
The only involvement that WeakHashmap has is that:
it creates and uses WeakReference objects for the keys, and
it expunges hash table entries whose key WeakReferences have been cleared by the GC.
Q: What if I don't refer to a key for a long time?
The criterion for deciding that a weak reference should be broken is not time based.
But it is possible that timing influences whether not a key is removed. For instance, a key could 1) cease to be strongly reference, 2) be retrieved from the map, and 3) be assigned to a reachable variable making it strongly referenced once again. If the GC doesn’t run during the window in which the key is not strongly reachable, the key and its associated value will stay in the map. (Which is what you’d want to happen …)
1 – Soft refere

Keys are weakly referenced but values are strongly referenced in WeakHashMap:
-----------------------------------------------------------------------------
The first sentence of WeakHashMap's javadoc says:
Hash table based implementation of the Map interface, with weak keys. An entry in a WeakHashMap will automatically be removed when its key is no longer in ordinary use. More precisely, the presence of a mapping for a given key will not prevent the key from being discarded by the garbage collector, that is, made finalizable, finalized, and then reclaimed. When a key has been discarded its entry is effectively removed from the map, so this class behaves somewhat differently from other Map implementations.
And somewhat further down, it writes:
The value objects in a WeakHashMap are held by ordinary strong references. Thus care should be taken to ensure that value objects do not strongly refer to their own keys, either directly or indirectly, since that will prevent the keys from being discarded.
That is, only the keys are weakly referenced, but the values are strongly referenced. In your code, each key is also used as a value, therefore strongly referenced, and therefore not garbage collected.
IdentityHashMap:
----------------
This class implements AbstractMap. It is similar to HashMap except that it uses reference equality when comparing the elements.
This class is not a general-purpose Map implementation. While this class implements the Map interface, it intentionally violates Map’s general contract, which mandates the use of the equals method when comparing objects.
This class is designed for use only in rare cases wherein reference-equality semantics are required. This class provides constant-time performance for the basic operations (get and put), assuming the system identity hash function (System.identityHashCode(Object)) disperses elements properly among the buckets.
This class has one tuning parameter (which affects performance but not semantics): expected maximum size. This parameter is the maximum number of key-value mappings that the map is expected to hold.

 

IdentityHashMap was added in Java 1.4, but it is still one of the lesser-known classes in Java. The main difference between IdentityHashMap and HashMap is that IdentityHashMap is a special implementation of the Map interface which doesn't use the equals() and hashCode() methods for comparing objects, unlike other Map implementations such as HashMap. Instead, IdentityHashMap uses the equality operator "==" to compare keys and values, which makes it faster compared to HashMap and suitable where you need reference-equality checks instead of logical equality. By the way, IdentityHashMap is a special implementation of the Map interface much like EnumMap, but it also violates the general contract of the Map interface, which mandates using the equals method for comparing objects. Also, IdentityHashMap vs HashMap is a good Java question and has been asked a couple of times.
Even though this question is not as popular as How HashMap works in Java or Difference between Hashtable and HashMap, it’s still a good question to ask. In this Java tutorial, we will see an example of IdentityHashMap and explores some key differences between IdentityHashMap and HashMap in Java.
Difference between IdentityHashMap and HashMap
Though both HashMap and IdentityHashMap implement the Map interface, have fail-fast iterators, and are non-synchronized collections, the following are some key differences between HashMap and IdentityHashMap in Java.
1) The main difference between HashMap vs IdentityHashMap is that IdentityHashMap uses equality operator “==” for comparing keys and values inside Map while HashMap uses equals method for comparing keys and values.
2) Unlike HashMap, which uses hashCode() to find the bucket location, IdentityHashMap doesn't use hashCode(); instead it uses System.identityHashCode(object).
3) Another key difference between IdentityHashMap and HashMap in Java is speed. Since IdentityHashMap doesn't use equals(), it is comparatively faster than HashMap for objects with expensive equals() and hashCode() implementations.
4) One more difference between HashMap and IdentityHashMap is immutability of the key. One of the basic requirements for safely storing objects in a HashMap is that keys need to be immutable; IdentityHashMap doesn't require keys to be immutable, as it does not rely on equals and hashCode.
There is also a class called IdentityHashtable which is analogous to Hashtable in Java but it’s not part of standard JDK and available in com.sun… package.
Example of IdentityHashMap in Java:
Here is an example of IdentityHashMap in Java which shows the key difference between HashMap and IdentityHashMap in comparing objects. IdentityHashMap uses the equality operator for comparison instead of the equals method:
import java.util.IdentityHashMap;
/**
 * Java program to show difference between HashMap and IdentityHashMap in Java
 * @author Javin Paul
 */
public class Testing {

    public static void main(String args[]) {
        IdentityHashMap<String, String> identityMap = new IdentityHashMap<String, String>();
        identityMap.put("sony", "bravia");
        identityMap.put(new String("sony"), "mobile");

        // size of identityMap should be 2 here because the two strings are different objects
        System.out.println("Size of IdentityHashMap: " + identityMap.size());
        System.out.println("IdentityHashMap: " + identityMap);

        identityMap.put("sony", "videogame");

        // size should still be 2 because "sony" and "sony" are the same interned String object
        System.out.println("Size of IdentityHashMap: " + identityMap.size());
        System.out.println("IdentityHashMap: " + identityMap);
    }
}
Output:
Size of IdentityHashMap: 2
IdentityHashMap: {sony=bravia, sony=mobile}
Size of IdentityHashMap: 2
IdentityHashMap: {sony=videogame, sony=mobile}

That's all on the difference between IdentityHashMap and HashMap in Java. As I said, IdentityHashMap violates the Map interface's general contract and should only be used when reference equality makes sense. As per the Javadoc, IdentityHashMap is suitable for keeping object references during serialization and deep copy, and can also be used to maintain proxy objects.

Read more: http://javarevisited.blogspot.com/2013/01/difference-between-identityhashmap-and-hashmap-jav
—————————————
Vector:
-------
Vector implements a dynamic array. It is similar to ArrayList, but with two differences −
Vector is synchronized.
Vector contains many legacy methods that are not part of the collections framework.
Vector proves to be very useful if you don’t know the size of the array in advance or you just need one that can change sizes over the lifetime of a program.
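The constructor table was not captured in these notes; for reference, Vector has a no-arg constructor (capacity 10), Vector(int initialCapacity), Vector(int initialCapacity, int capacityIncrement), and Vector(Collection). A small sketch of the growth behavior:

import java.util.Vector;

public class VectorDemo {
    public static void main(String[] args) {
        Vector<Integer> v = new Vector<>(5, 3); // capacity 5, grows by 3 when full
        for (int i = 0; i < 6; i++) {
            v.add(i);
        }
        System.out.println(v.capacity()); // 8 -- grew from 5 by the increment of 3
        System.out.println(v);            // [0, 1, 2, 3, 4, 5]
    }
}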


https://beginnersbook.com/2013/05/aggregation/

http://javarevisited.blogspot.com/2014/02/ifference-between-association-vs-composition-vs-aggregation.html

 

http://idiotechie.com/uml2-class-diagram-in-java/

