So I came across this article about hash tables that seems fairly remarkable, given how long the original concept was thought to be settled.
Yes, it's not exactly water-cooler material, since for the moment it's a theoretical performance advance. Still, I thought, why not get the opinion of some real comp-sci people
on what this might mean for future real-world applications?
It somehow reminded me of the bottlenecks in random write/rewrite speeds on PCIe 5.0 SSDs once the drive already holds data. Even though the bandwidth is there to write faster, the drive can't, because it still takes a set amount of time to allocate the incoming data to empty areas (along with moving, rearranging, or removing data already present on the drive) before the operation completes.
Am I going off into left field here, or headed somewhere in the right direction?
I then found this article from 2014 that might hold a grain of relevance regarding hash tables in SSD operations (more than a bit outdated, I know):
“Design Patterns for Tunable and Efficient SSD based Indexes” (Note: this is a pdf)
Along with seeing that hash tables are used in most major programming languages:
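For anyone skimming along: the built-in maps in most languages (Python's `dict`, Java's `HashMap`, and so on) are hash tables, and the classic open-addressing variants resolve collisions by probing nearby slots. A toy sketch of linear probing, just my own illustrative code (not from the article, and deliberately simplified, with no resizing or deletion):

```python
# Toy open-addressing hash table with linear probing.
# Illustrative only: no resizing, so a full table would probe forever.

class LinearProbingTable:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.slots = [None] * capacity  # each slot: None or (key, value)

    def _probe(self, key):
        # Start at the hashed slot and walk forward until we find the
        # key or an empty slot; this walk is what gets slower as the
        # table fills up, and it's roughly the cost the new result
        # is about improving.
        i = hash(key) % self.capacity
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % self.capacity
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot is not None else None

t = LinearProbingTable()
t.put("ssd", "fast reads")
t.put("hdd", "cheap capacity")
print(t.get("ssd"))  # -> fast reads
```

The point being: lookups are cheap on average, but the probe sequence lengthens as the table fills, which is why query efficiency at high load factors matters.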
So, any thoughts about how this efficiency improvement in handling hash table queries might have real world impact?
Thanks again!
And don't think too hard on it; I just thought I'd ask out of curiosity, as it seems interesting! All input is welcome!