A hash table (also called a scatter table) is a data structure accessed directly through key-value pairs: it maps each key to a position in the table and stores the record there, which speeds up lookups.
Hashtable lives in java.util and implements a hash table backed by the Map interface:
public class Hashtable<K,V>
    extends Dictionary<K,V>
    implements Map<K,V>, Cloneable, java.io.Serializable {}
Consequently, any object used as a key must implement the hashCode and equals methods.
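As a quick illustration, here is a minimal sketch of a key type that honors the contract (the class name UserId is made up for this example; only the pairing of hashCode and equals matters):

```java
import java.util.Hashtable;
import java.util.Objects;

// Hypothetical key class: hashCode() and equals() are overridden together,
// so two logically equal keys hash to the same bucket and compare equal.
class UserId {
    private final int id;
    UserId(int id) { this.id = id; }
    @Override public boolean equals(Object o) {
        return o instanceof UserId && ((UserId) o).id == this.id;
    }
    @Override public int hashCode() { return Objects.hash(id); }
}

public class KeyContractDemo {
    public static void main(String[] args) {
        Hashtable<UserId, String> table = new Hashtable<>();
        table.put(new UserId(42), "alice");
        // Lookup with a *different* but equal instance succeeds only because
        // hashCode() and equals() are consistent with each other.
        System.out.println(table.get(new UserId(42))); // prints "alice"
    }
}
```

If hashCode were not overridden, the second UserId(42) would hash to a different bucket and the lookup would return null.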
Two parameters affect a Hashtable's performance: the initial capacity and the load factor.
The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created.
Generally, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the time cost to look up an entry (which is reflected in most Hashtable operations, including get and put).
In the figure, key2 and key3 produce a hash collision: they map to the same bucket, so that bucket holds two entries. In general, every additional collision adds another entry to the same bucket, and a lookup in such a bucket must search the chain sequentially.
/*
The iterators returned by the iterator method of the collections returned by all of this class's "collection view methods" are fail-fast: if the Hashtable is structurally modified at any time after the iterator is created, in any way except through the iterator's own remove method, the iterator will throw a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future. The Enumerations returned by Hashtable's keys and elements methods are not fail-fast.
Note that the fail-fast behavior of an iterator cannot be guaranteed as it is, generally speaking, impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis. Therefore, it would be wrong to write a program that depended on this exception for its correctness: the fail-fast behavior of iterators should be used only to detect bugs.
*/
In plain terms: the passage above says that while a Hashtable's iterator is traversing the collection, any structural modification to the collection (adding or removing entries) other than through the iterator's own remove method causes the iterator to throw a ConcurrentModificationException.
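A small demonstration of the fail-fast behavior described above: the table is structurally modified after the iterator is created, so the next call to it.next() throws.

```java
import java.util.ConcurrentModificationException;
import java.util.Hashtable;
import java.util.Iterator;

public class FailFastDemo {
    public static void main(String[] args) {
        Hashtable<String, Integer> table = new Hashtable<>();
        table.put("a", 1);
        table.put("b", 2);

        Iterator<String> it = table.keySet().iterator();
        it.next();
        table.put("c", 3);  // structural modification after the iterator was created
        try {
            it.next();      // fail-fast: throws ConcurrentModificationException
        } catch (ConcurrentModificationException e) {
            System.out.println("caught ConcurrentModificationException");
        }
    }
}
```

Note that this is best-effort, as the Javadoc warns: correct programs must not rely on this exception being thrown.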
As of the Java 2 platform v1.2, this class was retrofitted to implement the Map interface, making it a member of the Java Collections Framework. Unlike the new collection implementations, Hashtable is synchronized. If a thread-safe implementation is not needed, it is recommended to use HashMap in place of Hashtable. If a thread-safe highly-concurrent implementation is desired, then it is recommended to use java.util.concurrent.ConcurrentHashMap in place of Hashtable.
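The recommendation above can be summarized in code; all three classes implement the same Map interface, so swapping one for another is usually a one-line change:

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapChoiceDemo {
    public static void main(String[] args) {
        // Single-threaded code: prefer HashMap (no locking overhead).
        Map<String, Integer> single = new HashMap<>();
        // Legacy synchronized map: Hashtable locks every method on `this`.
        Map<String, Integer> legacy = new Hashtable<>();
        // Highly concurrent code: ConcurrentHashMap uses fine-grained locking.
        Map<String, Integer> concurrent = new ConcurrentHashMap<>();

        single.put("k", 1);
        legacy.put("k", 1);
        concurrent.put("k", 1);
        // One behavioral difference: HashMap accepts a null key,
        // while Hashtable and ConcurrentHashMap reject nulls.
        single.put(null, 0);
    }
}
```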
private transient Entry<?,?>[] table; // the bucket array; as figure 2.4 shows, each slot stores a chain of key-value entries (the Entry type is defined inside this class)
private transient int count; // total number of entries in the table
private int threshold; // resize threshold: the table is rehashed once count exceeds it; its value is (int)(capacity * loadFactor)
private float loadFactor; // the table's load factor
private transient int modCount = 0; // number of structural modifications
public Hashtable(int initialCapacity, float loadFactor) {
    if (initialCapacity < 0) // a negative initial capacity is rejected
        throw new IllegalArgumentException("Illegal Capacity: "+initialCapacity);
    if (loadFactor <= 0 || Float.isNaN(loadFactor)) // the load factor must be a positive number
        throw new IllegalArgumentException("Illegal Load: "+loadFactor);
    if (initialCapacity==0) // a capacity of 0 is silently bumped to 1
        initialCapacity = 1;
    this.loadFactor = loadFactor; // the load factor passed validation, so assign it
    table = new Entry<?,?>[initialCapacity]; // allocate the bucket array at the initial capacity
    threshold = (int)Math.min(initialCapacity * loadFactor, MAX_ARRAY_SIZE + 1); // resize threshold: capacity * loadFactor, capped at MAX_ARRAY_SIZE + 1
}
The constructor taking an explicit initial capacity and load factor validates both arguments and throws an exception when either constraint is violated.
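The validation rules can be seen directly at the call site:

```java
import java.util.Hashtable;

public class CtorValidationDemo {
    public static void main(String[] args) {
        // A negative capacity is rejected with IllegalArgumentException.
        try {
            new Hashtable<String, String>(-1, 0.75f);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // "Illegal Capacity: -1"
        }
        // A NaN (or non-positive) load factor is rejected the same way.
        try {
            new Hashtable<String, String>(16, Float.NaN);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
        // A capacity of 0 is silently bumped to 1, so this constructs fine.
        new Hashtable<String, String>(0, 0.75f);
    }
}
```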
public Hashtable(int initialCapacity) {
    this(initialCapacity, 0.75f);
}
Hashtable's default initial capacity is 11, and its default load factor is 0.75f:
public Hashtable() {
    this(11, 0.75f);
}
public Hashtable(Map<? extends K, ? extends V> t) {
    this(Math.max(2*t.size(), 11), 0.75f); // ensure enough capacity for the source map, using the default load factor
    putAll(t);
}
// synchronized guarantees data safety in multi-threaded environments
public synchronized V put(K key, V value) {
    // Make sure the value is not null
    if (value == null) {
        throw new NullPointerException();
    }
    // Make sure the key is not already in the hashtable
    Entry<?,?> tab[] = table;
    int hash = key.hashCode(); // take the key's hash code directly -- unlike HashMap, which spreads the bits first
    int index = (hash & 0x7FFFFFFF) % tab.length; // clear the sign bit, then take the modulus to pick the target bucket
    @SuppressWarnings("unchecked")
    Entry<K,V> entry = (Entry<K,V>)tab[index]; // head of the bucket's singly linked list
    for(; entry != null ; entry = entry.next) { // walk the chain; if an equal key is found, overwrite its value
        if ((entry.hash == hash) && entry.key.equals(key)) {
            V old = entry.value;
            entry.value = value;
            return old;
        }
    }
    // Reaching this point means no equal key exists, so insert a new entry
    addEntry(hash, key, value, index);
    return null;
}
//------------------------------------addEntry----------------------------------------
private void addEntry(int hash, K key, V value, int index) {
    modCount++; // one more structural modification
    Entry<?,?> tab[] = table;
    if (count >= threshold) { // the entry count has reached the threshold, so trigger a rehash()
        // Grow the table, then recompute the bucket against the new, larger array
        rehash();
        tab = table;
        hash = key.hashCode();
        index = (hash & 0x7FFFFFFF) % tab.length;
    }
    // Creates the new entry.
    @SuppressWarnings("unchecked")
    Entry<K,V> e = (Entry<K,V>) tab[index]; // current head of the bucket's chain
    tab[index] = new Entry<>(hash, key, value, e); // the new entry becomes the new head (head insertion)
    count++; // one more entry in the table
}
To summarize the flow of put:
1. Check that the value is not null (a null value throws NullPointerException).
2. Compute the bucket index via (hash & 0x7FFFFFFF) % tab.length.
3. Traverse the bucket's chain; if an equal key is found, overwrite its value and return the old one.
4. Otherwise call addEntry to insert a new entry.
5. Inside addEntry, first check whether inserting would exceed the threshold; if so, rehash and recompute the bucket index.
6. Finally insert the new entry at the head of the chain.
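Step 2 deserves a closer look. The following sketch replicates the bucket-index computation (the helper name indexFor is made up for illustration): masking with 0x7FFFFFFF clears the sign bit, so even a negative hashCode() yields a valid non-negative array index.

```java
// Replicates (hash & 0x7FFFFFFF) % tab.length from put().
public class BucketIndexDemo {
    static int indexFor(int hash, int tableLength) {
        return (hash & 0x7FFFFFFF) % tableLength;
    }
    public static void main(String[] args) {
        int positive = "cat".hashCode();
        int negative = -123456;
        System.out.println(indexFor(positive, 11));
        System.out.println(indexFor(negative, 11)); // still in [0, 11) despite the negative hash
    }
}
```

Without the mask, a negative hash would produce a negative remainder in Java and cause an ArrayIndexOutOfBoundsException.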
public synchronized V get(Object key) {
    Entry<?,?> tab[] = table;
    int hash = key.hashCode(); // compute the hash, then the bucket index from it
    int index = (hash & 0x7FFFFFFF) % tab.length;
    // walk the chain looking for the matching key
    for (Entry<?,?> e = tab[index] ; e != null ; e = e.next) {
        if ((e.hash == hash) && e.key.equals(key)) {
            return (V)e.value;
        }
    }
    return null;
}
In short, get locates the target bucket via key.hashCode(), then traverses the bucket's linked list to find the corresponding entry.
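The put/get semantics described so far can be observed directly:

```java
import java.util.Hashtable;

public class GetPutDemo {
    public static void main(String[] args) {
        Hashtable<String, Integer> table = new Hashtable<>();
        // put returns the previous value for the key, or null if there was none.
        System.out.println(table.put("a", 1)); // null (no previous mapping)
        System.out.println(table.put("a", 2)); // 1 (old value, now overwritten)
        System.out.println(table.get("a"));    // 2
        System.out.println(table.get("zzz"));  // null (absent key)
        // Unlike HashMap, a null value is rejected outright.
        try {
            table.put("b", null);
        } catch (NullPointerException e) {
            System.out.println("null value rejected");
        }
    }
}
```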
// synchronized guarantees the removal is thread-safe
public synchronized V remove(Object key) {
    Entry<?,?> tab[] = table;
    int hash = key.hashCode();
    int index = (hash & 0x7FFFFFFF) % tab.length; // index of the target bucket in the array
    @SuppressWarnings("unchecked")
    Entry<K,V> e = (Entry<K,V>)tab[index];
    // walk the singly linked list and unlink the matching node
    for(Entry<K,V> prev = null ; e != null ; prev = e, e = e.next) {
        if ((e.hash == hash) && e.key.equals(key)) {
            modCount++; // one more structural modification
            if (prev != null) {
                prev.next = e.next; // bypass a node in the middle of the chain
            } else {
                tab[index] = e.next; // the head matched, so its successor becomes the new head
            }
            count--;
            V oldValue = e.value;
            e.value = null;
            return oldValue;
        }
    }
    return null;
}
/* Increases the capacity of and internally reorganizes this hashtable, in order to accommodate and access its entries more efficiently. This method is called automatically when the number of keys in the hashtable exceeds this hashtable's capacity and load factor. */
protected void rehash() {
    int oldCapacity = table.length;
    Entry<?,?>[] oldMap = table;
    // overflow-conscious code
    int newCapacity = (oldCapacity << 1) + 1; // the key growth step: double the old capacity and add 1
    if (newCapacity - MAX_ARRAY_SIZE > 0) {
        if (oldCapacity == MAX_ARRAY_SIZE)
            // Keep running with MAX_ARRAY_SIZE buckets
            return;
        newCapacity = MAX_ARRAY_SIZE;
    }
    Entry<?,?>[] newMap = new Entry<?,?>[newCapacity];
    modCount++;
    threshold = (int)Math.min(newCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
    table = newMap;
    // transfer every entry from the old buckets into the new, larger array
    for (int i = oldCapacity ; i-- > 0 ;) {
        for (Entry<K,V> old = (Entry<K,V>)oldMap[i] ; old != null ; ) {
            Entry<K,V> e = old;
            old = old.next;
            int index = (e.hash & 0x7FFFFFFF) % newCapacity;
            e.next = (Entry<K,V>)newMap[index];
            newMap[index] = e;
        }
    }
}
The key line is int newCapacity = (oldCapacity << 1) + 1;. Shifting left by one bit doubles the old capacity, and then 1 is added, so the growth rule is 2 × old capacity + 1.
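Applying that rule starting from the default capacity of 11 gives the sequence 23, 47, 95, ... (the helper name grow is made up for illustration):

```java
// Sketch of the rehash() growth rule: newCapacity = oldCapacity * 2 + 1.
public class GrowthDemo {
    static int grow(int oldCapacity) {
        return (oldCapacity << 1) + 1;
    }
    public static void main(String[] args) {
        int cap = 11;
        for (int i = 0; i < 3; i++) {
            cap = grow(cap);
            System.out.print(cap + " "); // prints 23 47 95
        }
    }
}
```

An odd capacity keeps the modulus from degenerating on even hash distributions; this contrasts with HashMap, which grows by plain doubling and relies on power-of-two table sizes.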