Data structures

Contents

Introduction: Data structure, Linked data structure, Succinct data structure, Implicit data structure, Compressed data structure, Search data structure, Persistent data structure, Concurrent data structure

Abstract data types: Abstract data type, List, Stack, Queue, Deque, Priority queue, Map, Bidirectional map, Multimap, Set, Tree

Arrays: Array data structure, Row-major order, Dope vector, Iliffe vector, Dynamic array, Hashed array tree, Gap buffer, Circular buffer, Sparse array, Bit array, Bitboard, Parallel array, Lookup table

Lists: Linked list, XOR linked list, Unrolled linked list, VList, Skip list, Self-organizing list

Binary trees: Binary tree, Binary search tree, Self-balancing binary search tree, Tree rotation, Weight-balanced tree, Threaded binary tree, AVL tree, Red-black tree, AA tree, Scapegoat tree, Splay tree, T-tree, Rope, Top tree, Tango tree, Van Emde Boas tree, Cartesian tree, Treap

B-trees: B-tree, B+ tree, Dancing tree, 2-3 tree, 2-3-4 tree, Queap, Fusion tree, Bx-tree

Heaps: Heap, Binary heap, Binomial heap, Fibonacci heap, 2-3 heap, Pairing heap, Beap, Leftist tree, Skew heap, Soft heap, d-ary heap

Tries: Trie, Radix tree, Suffix tree, Suffix array, Compressed suffix array, FM-index, Generalised suffix tree, B-trie, Judy array, Directed acyclic word graph

Multiway trees: Ternary search tree, And-or tree, (a,b)-tree, Link/cut tree, SPQR tree, Spaghetti stack, Disjoint-set data structure

Space-partitioning trees: Space partitioning, Binary space partitioning, Segment tree, Interval tree, Range tree, Bin, k-d tree, Implicit k-d tree, min/max kd-tree, Adaptive k-d tree, Quadtree, Octree, Linear octree, Z-order, UB-tree, R-tree, R+ tree, R* tree, Hilbert R-tree, X-tree, Metric tree, VP-tree, BK-tree

Hashes: Hash table, Hash function, Open addressing, Lazy deletion, Linear probing, Quadratic probing, Double hashing, Cuckoo hashing, Coalesced hashing, Perfect hash function, Universal hashing, Linear hashing, Extendible hashing, 2-choice hashing, Pearson hashing, Fowler-Noll-Vo hash function, Bitstate hashing, Bloom filter, Locality preserving hashing, Morton number, Zobrist hashing, Rolling hash, Hash list, Hash tree, Prefix hash tree, Hash trie, Hash array mapped trie, Distributed hash table, Consistent hashing, Stable hashing, Koorde

Graphs: Graph, Adjacency list, Adjacency matrix, And-inverter graph, Binary decision diagram, Binary moment diagram, Zero-suppressed decision diagram, Propositional directed acyclic graph, Graph-structured stack, Scene graph

Appendix: Big O notation, Amortized analysis, Locality of reference, Standard Template Library

References: Article Sources and Contributors; Image Sources, Licenses and Contributors

Article Licenses: License


Introduction

Data structure

In computer science, a data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently.[1][2] Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, B-trees are particularly well-suited for the implementation of databases, while compiler implementations usually use hash tables to look up identifiers.

[Figure: a hash table]

Data structures are used in almost every program or software system. Data structures provide a means to manage huge amounts of data efficiently, such as large databases and internet indexing services. Usually, efficient data structures are a key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design.

Overview

The main kinds of data structure include the following:

- An array data structure stores a number of elements of the same type in a specific order. They are accessed using an integer index to specify which element is required (although the elements may be of almost any type). Arrays may be fixed-length or expandable.
- A record (also called a tuple or struct) is among the simplest data structures. A record is a value that contains other values, typically in fixed number and sequence and typically indexed by names. The elements of records are usually called fields or members.
- A hash or dictionary or map is a more flexible variation on a record, in which name-value pairs can be added and deleted freely.
- A union type definition specifies which of a number of permitted primitive types may be stored in its instances, e.g. "float or long integer". Contrast this with a record, which could be defined to contain both a float and an integer; in a union, there is only one value at a time.
- A tagged union (also called a variant, variant record, discriminated union, or disjoint union) contains an additional field indicating its current type, for enhanced type safety.
- A set is an abstract data structure that can store specific values, without any particular order and with no repeated values. Values themselves are not retrieved from sets; rather, one tests a value for membership to obtain a Boolean "in" or "not in".

- An object contains a number of data fields, like a record, and also a number of program code fragments for accessing or modifying them. Data structures not containing code, like those above, are called plain old data structures.

Many others are possible, but they tend to be further variations and compounds of the above. (A sketch of the record and tagged-union ideas in Java follows this list.)
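Java has no built-in union type, so a tagged union is usually modeled as a small class hierarchy, with the runtime class of the object playing the role of the tag. The following minimal sketch is illustrative only (none of these names come from any library):

// A record: a value with a fixed set of named fields.
class Point {
    double x, y;
}

// A tagged union ("variant"): the tag is the object's runtime class.
abstract class Shape { }                       // the "union" type
class Circle extends Shape { double radius; }  // variant 1
class Rect   extends Shape { double w, h; }    // variant 2

class Area {
    // Dispatch on the tag to read the one value the union currently holds.
    static double area(Shape s) {
        if (s instanceof Circle) { Circle c = (Circle) s; return Math.PI * c.radius * c.radius; }
        if (s instanceof Rect)   { Rect r = (Rect) s;   return r.w * r.h; }
        throw new IllegalArgumentException("unknown variant");
    }
}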


Basic principles

Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by an address: a bit string that can itself be stored in memory and manipulated by the program. Thus the record and array data structures are based on computing the addresses of data items with arithmetic operations, while the linked data structures are based on storing addresses of data items within the structure itself. Many data structures use both principles, sometimes combined in non-trivial ways (as in XOR linking). A toy illustration of these two principles appears below.

The implementation of a data structure usually requires writing a set of procedures that create and manipulate instances of that structure. The efficiency of a data structure cannot be analyzed separately from those operations. This observation motivates the theoretical concept of an abstract data type, a data structure that is defined indirectly by the operations that may be performed on it, and the mathematical properties of those operations (including their space and time cost).
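As a rough illustration of these two principles, the following toy sketch (entirely hypothetical, with a plain int array standing in for machine memory) locates array elements by address arithmetic but reaches linked nodes through addresses stored in the data itself:

// A toy "memory", illustrating the two addressing principles.
public class ToyMemory {
    int[] mem = new int[1024];

    // Array of 2-field records starting at 'base': field f of record i
    // lives at an address that is COMPUTED, not stored.
    int field(int base, int i, int f) { return mem[base + 2 * i + f]; }

    // Linked node at address p: slot p holds the value, slot p + 1 holds
    // the address of the next node (0 = null). The link is STORED data.
    int value(int p) { return mem[p]; }
    int next(int p)  { return mem[p + 1]; }
}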

Language support

Most assembly languages and some low-level languages, such as BCPL (Basic Combined Programming Language), lack support for data structures. Many high-level programming languages, and some higher-level assembly languages such as MASM, on the other hand, have special syntax or other built-in support for certain data structures, such as vectors (one-dimensional arrays) in the C language or multi-dimensional arrays in Pascal.

Most programming languages feature some sort of library mechanism that allows data structure implementations to be reused by different programs. Modern languages usually come with standard libraries that implement the most common data structures. Examples are the C++ Standard Template Library, the Java Collections Framework, and Microsoft's .NET Framework. Modern languages also generally support modular programming, the separation between the interface of a library module and its implementation. Some provide opaque data types that allow clients to hide implementation details. Object-oriented programming languages, such as C++ and Java, and environments such as the .NET Framework, may use classes for this purpose.

Many known data structures have concurrent versions that allow multiple computing threads to access the data structure simultaneously.

References

[1] Paul E. Black (ed.), entry for data structure in Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology. 15 December 2004. Online version (http://www.itl.nist.gov/div897/sqg/dads/HTML/datastructur.html). Accessed May 21, 2009.
[2] Entry "data structure" in the Encyclopaedia Britannica (2009). Online entry (http://www.britannica.com/EBchecked/topic/152190/data-structure). Accessed May 21, 2009.

Further reading

- Peter Brass, Advanced Data Structures, Cambridge University Press, 2008.
- Donald Knuth, The Art of Computer Programming, vol. 1. Addison-Wesley, 3rd edition, 1997.
- Dinesh Mehta and Sartaj Sahni, Handbook of Data Structures and Applications, Chapman and Hall/CRC Press, 2007.
- Niklaus Wirth, Algorithms and Data Structures, Prentice Hall, 1985.


External links

- UC Berkeley video course on data structures (http://academicearth.org/courses/data-structures)
- Descriptions (http://nist.gov/dads/) from the Dictionary of Algorithms and Data Structures
- CSE.unr.edu (http://www.cse.unr.edu/~bebis/CS308/)
- Data structures course with animations (http://www.cs.auckland.ac.nz/software/AlgAnim/ds_ToC.html)
- Data structure tutorials with animations (http://courses.cs.vt.edu/~csonline/DataStructures/Lessons/index.html)
- An Examination of Data Structures from .NET perspective (http://msdn.microsoft.com/en-us/library/aa289148(VS.71).aspx)
- Schaffer, C. Data Structures and Algorithm Analysis (http://people.cs.vt.edu/~shaffer/Book/C++3e20110915.pdf)

Linked data structure

In computer science, a linked data structure is a data structure which consists of a set of data records (nodes) linked together and organized by references (links or pointers). In linked data structures, the links are usually treated as special data types that can only be dereferenced or compared for equality. Linked data structures are thus contrasted with arrays and other data structures that require performing arithmetic operations on pointers. This distinction holds even when the nodes are actually implemented as elements of a single array, and the references are actually array indices: as long as no arithmetic is done on those indices, the data structure is essentially a linked one. Linking can be done in two ways: using dynamic allocation, or using array-index linking (a sketch of the latter follows).

Linked data structures include linked lists, search trees, expression trees, and many other widely used data structures. They are also key building blocks for many efficient algorithms, such as topological sort[1] and set union-find.[2]
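Here is a minimal Java sketch of array-index linking (the class and field names are invented, and capacity handling and deletion are omitted for brevity). The links are indices into parallel arrays, and no arithmetic is ever performed on them, so the structure behaves as a linked one:

// Linked list whose "pointers" are array indices: node i's value is in
// value[i] and the index of its successor is in next[i]; -1 marks the end.
public class IndexLinkedList {
    int[] value = new int[16];   // fixed capacity for brevity
    int[] next  = new int[16];
    int head = -1;               // index of the first node, -1 if empty
    int free = 0;                // next unused slot

    void pushFront(int v) {
        value[free] = v;
        next[free] = head;       // link the new node to the old head
        head = free++;
    }

    int sum() {                  // traverse by following the stored links
        int s = 0;
        for (int i = head; i != -1; i = next[i]) s += value[i];
        return s;
    }
}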

Common types of linked data structures

Linked lists

A linked list is a collection of structures ordered not by their physical placement in memory but by logical links that are stored as part of the data in the structure itself. The structures need not be stored in adjacent memory locations. Every structure has a data field and an address field; the address field contains the address of its successor. Linked lists can be singly, doubly or multiply linked, and can be either linear or circular.

Basic properties:
- Objects, called nodes, are linked in a linear sequence.
- A reference to the first node of the list is always kept. This is called the 'head' or 'front'.[3]

[Figure: a linked list whose three nodes contain two fields each: an integer value and a link to the next node]


[Figure: a linked list with a single node]

Example in Java

This is an example of the node class used to store integers in a Java implementation of a linked list.

public class IntNode {
    public int value;       // the data field
    public IntNode link;    // reference to the next node (null at the end)
    public IntNode(int v) { value = v; }
}

Example in C

This is an example of the node structure used for the implementation of a linked list in C.

struct node {
    int val;                /* the data field */
    struct node *next;      /* pointer to the successor node */
};

This is an example using typedefs.

typedef struct node_s node_t;

struct node_s {
    int val;
    node_t *next;
};

Note: a structure like this, which contains a member that points to the same structure type, is called a self-referential structure.


Search trees

A search tree is a tree data structure whose nodes store data values drawn from some ordered set, arranged so that an in-order traversal of the tree visits the nodes in ascending order of the stored values. (A traversal sketch follows.)

Basic properties:
- Objects, called nodes, store values from an ordered set.
- In-order traversal provides an ascending readout of the data in the tree.
- Subtrees of the tree are themselves trees.
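A minimal Java sketch of the in-order traversal property (all names are illustrative): visiting the left subtree, then the node, then the right subtree reads the keys out in ascending order.

class TreeNode {
    int key;
    TreeNode left, right;
}

class InOrder {
    // Appends the keys of the search tree rooted at t to out, in ascending order.
    static void inorder(TreeNode t, java.util.List<Integer> out) {
        if (t == null) return;
        inorder(t.left, out);   // every key here is smaller than t.key
        out.add(t.key);
        inorder(t.right, out);  // every key here is larger than t.key
    }
}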

Advantages and disadvantages

Advantages against arrays

Compared to arrays, linked data structures allow more flexibility in organizing the data and in allocating space for it. With arrays, the size must be specified precisely at the beginning, which can be a potential waste of memory. A linked data structure is built dynamically and never needs to be bigger than the program requires; no guessing is needed about how much space to allocate. This is a key feature in avoiding wasted memory.

Array elements have to occupy a contiguous (connected and sequential) portion of memory, but in a linked data structure the reference stored in each node tells us where to find the next one. The nodes of a linked data structure can also be moved individually to different locations without affecting the logical connections between them, unlike arrays. With due care, a process can add or delete nodes in one part of a data structure even while other processes are working on other parts.

On the other hand, access to any particular node in a linked data structure requires following a chain of references stored within it. If the structure has n nodes, and each node contains at most b links, there will be some nodes that cannot be reached in fewer than log_b(n) steps, and for many structures some nodes may require up to n - 1 steps in the worst case. In contrast, many array data structures allow access to any element with a constant number of operations, independent of the number of entries.

Broadly, linked data structures are implemented as dynamic data structures, which gives the chance to reuse particular regions of space: memory is allocated as needed and deallocated when it is no longer needed, so memory can be utilized more efficiently.

General disadvantages

Linked data structures may also incur substantial memory-allocation overhead (if nodes are allocated individually) and frustrate memory paging and processor caching algorithms (since they generally have poor locality of reference). In some cases, linked data structures may also use more memory (for the link fields) than competing array structures, because linked data structures are not contiguous: instances of data can be found all over memory, unlike in arrays. In arrays, the nth element can be accessed immediately, while in a linked data structure we have to follow multiple pointers, so element access time varies according to where in the structure the element is.

In some theoretical models of computation that enforce the constraints of linked structures, such as the pointer machine, many problems require more steps than in the unconstrained random-access machine model.


References

[1] Donald Knuth, The Art of Computer Programming.
[2] Bernard A. Galler and Michael J. Fischer. An improved equivalence algorithm. Communications of the ACM, Volume 7, Issue 5 (May 1964), pages 301-303. The paper originating disjoint-set forests. ACM Digital Library (http://portal.acm.org/citation.cfm?doid=364099.364331)
[3] http://www.cs.toronto.edu/~hojjat/148s07/lectures/week5/07linked.pdf

Succinct data structure

In computer science, a succinct data structure is a data structure which uses an amount of space that is "close" to the information-theoretic lower bound, but (unlike other compressed representations) still allows for efficient query operations. The concept was originally introduced by Jacobson[1] to encode bit vectors, (unlabeled) trees, and planar graphs. Unlike general lossless data compression algorithms, succinct data structures retain the ability to use them in-place, without decompressing them first. A related notion is that of a compressed data structure, in which the size of the data structure depends upon the particular data being represented.

Suppose that Z is the information-theoretically optimal number of bits needed to store some data. A representation of this data is called:
- implicit if it takes Z + O(1) bits of space,
- succinct if it takes Z + o(Z) bits of space, and
- compact if it takes O(Z) bits of space.

Implicit structures are thus usually reduced to storing information using some permutation of the input data; the most well-known example of this is the heap.

Succinct dictionaries

Succinct indexable dictionaries, also called rank/select dictionaries, form the basis of a number of succinct representation techniques, including binary trees, k-ary trees and multisets,[2] as well as suffix trees and arrays.[3] The basic problem is to store a subset S of a universe U = {0, ..., n - 1}, usually represented as a bit array B where B[i] = 1 iff i is in S. An indexable dictionary supports the usual methods on dictionaries (queries, and insertions/deletions in the dynamic case) as well as the following operations:

- rank(x) = |{k in S : k <= x}|, the number of elements of S not exceeding x, and
- select(j), the position of the j-th smallest element of S,

for j <= |S|.[4]

There is a simple representation[4] which uses n + o(n) bits of storage space (the original bit array and an o(n) auxiliary structure) and supports rank and select in constant time. It uses an idea similar to that for range-minimum queries; there are a constant number of recursions before stopping at a subproblem of a limited size. The bit array B is partitioned into large blocks of size l = log^2(n) bits and small blocks of size s = (log n)/2 bits. For each large block, the rank of its first bit is stored in a separate table R_l; each such entry takes log(n) bits, for a total of O(n / log n) bits of storage. Within a large block, another directory R_s stores the rank of each of the l/s small blocks it contains. The difference here is that each entry needs only log(l) = O(log log n) bits, since only the differences from the rank of the first bit in the containing large block need to be stored; this table therefore takes a total of O(n log log n / log n) bits. A lookup table R_p can then be used that stores the answer to every possible rank query on a bit string of length (log n)/2; this requires O(sqrt(n) log n log log n) bits of storage space. Thus, since each of these auxiliary tables takes o(n) bits, this data structure supports rank queries in O(1) time using n + o(n) bits of space.

To answer rank(x) in constant time, a constant-time algorithm computes

rank(x) = R_l[floor(x / l)] + R_s[floor(x / s)] + R_p[b, x mod s],

where b is the bit pattern of the small block containing position x.


In practice, the lookup table R_p can be replaced by bitwise operations and smaller tables that find the number of bits set in the small blocks. This is often beneficial, since succinct data structures find their uses in large data sets, in which case cache misses become much more frequent and the chances of the lookup table being evicted from closer CPU caches become higher.[5] Select queries can be easily supported by doing a binary search on the same auxiliary structure used for rank; however, this takes O(log n) time in the worst case. A more complicated structure using o(n) bits of additional storage can be used to support select in constant time.[6]

In practice, many of these solutions have hidden constants in the O() notation which dominate before any asymptotic advantage becomes apparent; implementations using broadword operations and word-aligned blocks often perform better in practice.[7]

Entropy-compressed dictionaries

The n + o(n) space approach can be improved by noting that there are C(n, m) distinct m-subsets of U (or binary strings of length n with exactly m 1s), and thus B = ceil(log2 C(n, m)) is an information-theoretic lower bound on the number of bits needed to store S. There is a succinct (static) dictionary which attains this bound, namely using B + o(B) bits of space.[8] This structure can be extended to support rank and select queries, taking B + o(n) bits of space.[2] This bound can in turn be reduced to a space/time tradeoff, shrinking the storage space of the dictionary to B + O(n / (log n / t)^t) bits with queries taking O(t) time.[9]

Examples

When a sequence of variable-length items needs to be encoded, the items can simply be placed one after another, with no delimiters. A separate binary string consisting of 1s in the positions where an item begins, and 0s everywhere else, is encoded along with it. Given this string, the select function can quickly determine where each item begins, given its index.[10]

Another example is the representation of a binary tree: an arbitrary binary tree on n nodes can be represented in 2n + o(n) bits while supporting a variety of operations on any node, including finding its parent, its left and right child, and returning the size of its subtree, each in constant time. The number of different binary trees on n nodes is the nth Catalan number, C(2n, n) / (n + 1). For large n, this is about 4^n; thus we need at least about log2(4^n) = 2n bits to encode it. A succinct binary tree therefore would occupy only 2 bits per node.

References

[1] Jacobson, G. J. (1988). Succinct static data structures.
[2] Raman, R.; V. Raman, S. S. Rao (2002). "Succinct indexable dictionaries with applications to encoding k-ary trees and multisets" (http://www.cs.cmu.edu/afs/cs.cmu.edu/project/aladdin/wwwlocal/hash/RaRaRa02.pdf). Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms. pp. 233-242. ISBN 0-89871-513-X.
[3] Sadakane, K.; R. Grossi (2006). "Squeezing succinct data structures into entropy bounds" (http://www.dmi.unisa.it/people/cerulli/www/WSPages/WSFiles/Abs/S3/S33_abs_Grossi.pdf). Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithms. pp. 1230-1239. ISBN 0-89871-605-5.
[4] Jacobson, G. (1989). Space-efficient static trees and graphs (http://www.cs.cmu.edu/afs/cs/project/aladdin/wwwlocal/compression/00063533.pdf).
[5] González, R.; S. Grabowski, V. Mäkinen, G. Navarro (2005). "Practical implementation of rank and select queries" (http://www.dcc.uchile.cl/~gnavarro/algoritmos/ps/wea05.pdf). Poster Proceedings Volume of 4th Workshop on Efficient and Experimental Algorithms (WEA). pp. 27-38.
[6] Clark, D. (1998). Compact pat trees (https://uwspace.uwaterloo.ca/bitstream/10012/64/1/nq21335.pdf).
[7] Vigna, S. (2008). "Broadword implementation of rank/select queries" (http://sux.dsi.unimi.it/paper.pdf). Experimental Algorithms: 154-168.
[8] Brodnik, A.; J. I. Munro (1999). "Membership in constant time and almost-minimum space" (http://www.cs.cmu.edu/afs/cs.cmu.edu/project/aladdin/wwwlocal/compression/BM99.pdf). SIAM J. Comput. 28 (5): 1627-1640. doi:10.1137/S0097539795294165.
[9] Pătrașcu, M. (2008). "Succincter" (http://people.csail.mit.edu/mip/papers/succinct/succinct.pdf). Foundations of Computer Science, 2008. FOCS'08. IEEE 49th Annual IEEE Symposium on. pp. 305-313.
[10] Belazzougui, Djamal. "Hash, displace, and compress" (http://cmph.sourceforge.net/papers/esa09.pdf).

Implicit data structure

In computer science, an implicit data structure is a data structure that uses very little memory besides the actual data elements, i.e. very little information other than the main data is stored. These are storage schemes which retain no pointers and represent a file of n k-key records as a simple n-by-k array, and thus support faster retrieval. In implicit data structures, the only structural information given is that which allows the array to grow and shrink as n changes; no extra information is required. The structure is called "implicit" because most of the structure of the elements is expressed implicitly by their order. Another term used interchangeably is space efficient. Definitions of "very little" are vague and can mean anywhere from O(1) to O(log n) extra space. Everything is accessed in-place, by reading bits at various positions in the data; to achieve optimal coding, we count bits rather than bytes. Implicit data structures are frequently also succinct data structures.

Although one may argue that disk space is no longer a problem and we should not concern ourselves with improving space utilization, the issue that implicit data structures are designed to improve is main memory utilization. Hard disks, or any other means of large data capacity, are orders of magnitude slower than main memory. Hence, the higher the percentage of a task that can fit in buffers in main memory, the less the dependence on slow I/O devices. If a larger chunk of an implicit data structure fits in main memory, the operations performed on it can be faster even if the asymptotic running time is not as good as that of its space-oblivious counterpart. Furthermore, since the CPU cache is usually much smaller than main memory, implicit data structures can improve cache efficiency and thus running speed, especially if the method used improves locality. Keys are scanned very efficiently, and downloading indexed data onto mobile devices becomes easier.

Implicit data structure for weighted elements

For presenting elements with different weights, several data structures are required. The structure uses one extra location besides those required for the values of the elements. The first structure supports a worst-case search time expressed in terms of the rank of the weight of the element sought with respect to the multi-set of weights. If the elements are drawn from a uniform distribution, a variation of this structure achieves a better average search time. The same result holds for data structures in which the intervals between consecutive values have access probabilities.

Examples

Examples of implicit data structures include the following (see the heap sketch below):
- Binary heap
- Beap
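A minimal sketch of the binary heap as an implicit structure (illustrative names, minimal error handling): the parent/child relationships are encoded purely by array position, with no pointers stored. For the node at index i, the parent is at (i - 1) / 2 and the children at 2i + 1 and 2i + 2.

// A min-heap stored implicitly in an array.
public class ImplicitMinHeap {
    private int[] a = new int[16];
    private int n = 0;

    public void insert(int x) {
        if (n == a.length) a = java.util.Arrays.copyOf(a, 2 * n);
        a[n] = x;
        int i = n++;
        while (i > 0 && a[(i - 1) / 2] > a[i]) {   // sift up until heap order holds
            int p = (i - 1) / 2;
            int t = a[p]; a[p] = a[i]; a[i] = t;
            i = p;
        }
    }

    public int min() { return a[0]; }  // smallest element sits at the root (assumes non-empty)
}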

Further reading

See publications of Hervé Brönnimann,[1] J. Ian Munro,[2] and Greg Frederickson.[3]

References

[1] http://photon.poly.edu/~hbr/
[2] http://www.cs.uwaterloo.ca/~imunro/
[3] http://www.cs.purdue.edu/people/gnf


Compressed data structure

The term compressed data structure arises in the computer science subfields of algorithms, data structures, and theoretical computer science. It refers to a data structure whose operations are roughly as fast as those of a conventional data structure for the problem, but whose size can be substantially smaller. The size of the compressed data structure is typically highly dependent upon the entropy of the data being represented.

Important examples of compressed data structures include the compressed suffix array[1][2] and the FM-index,[3] both of which can represent an arbitrary text T of n characters for pattern matching. Given any input pattern P, they support the operation of finding if and where P appears in T. The search time is proportional to the sum of the length of the pattern P, a very slow-growing function of the length of the text T, and the number of reported matches. The space they occupy is roughly equal to the size of the text T in entropy-compressed form, such as that obtained by Prediction by Partial Matching or gzip. Moreover, both data structures are self-indexing, in that they can reconstruct the text T in a random-access manner, and thus the underlying text T can be discarded. In other words, they simultaneously provide a compressed and quickly searchable representation of the text T. They represent a substantial space improvement over the conventional suffix tree and suffix array, which occupy many times more space than the size of T. They also support searching for arbitrary patterns, as opposed to the inverted index, which can support only word-based searches. In addition, inverted indexes do not have the self-indexing feature.

An important related notion is that of a succinct data structure, which uses space roughly equal to the information-theoretic minimum, a worst-case notion of the space needed to represent the data. In contrast, the size of a compressed data structure depends upon the particular data being represented. When the data are compressible, as is often the case in practice for natural-language text, the compressed data structure can occupy substantially less space than the information-theoretic minimum.

References

[1] R. Grossi and J. S. Vitter, Compressed Suffix Arrays and Suffix Trees with Applications to Text Indexing and String Matching, Proceedings of the 32nd ACM Symposium on Theory of Computing, May 2000, 397-406. Journal version in SIAM Journal on Computing, 35(2), 2005, 378-407.
[2] R. Grossi, A. Gupta, and J. S. Vitter, High-Order Entropy-Compressed Text Indexes, Proceedings of the 14th Annual SIAM/ACM Symposium on Discrete Algorithms, January 2003, 841-850.
[3] P. Ferragina and G. Manzini, Opportunistic Data Structures with Applications, Proceedings of the 41st IEEE Symposium on Foundations of Computer Science, November 2000, 390-398. Journal version in Indexing Compressed Text, Journal of the ACM, 52(4), 2005, 552-581.


Search data structure

In computer science, a search data structure is any data structure that allows the efficient retrieval of specific items from a set of items, such as a specific record from a database.

The simplest, most general, and least efficient search structure is merely an unordered sequential list of all the items. Locating the desired item in such a list, by the linear search method, inevitably requires a number of operations proportional to the number n of items, in the worst case as well as in the average case. Useful search data structures allow faster retrieval; however, they are limited to queries of some specific kind. Moreover, since the cost of building such structures is at least proportional to n, they only pay off if several queries are to be performed on the same database (or on a database that changes little between queries).

Static search structures are designed for answering many queries on a fixed database; dynamic structures also allow insertion, deletion, or modification of items between successive queries. In the dynamic case, one must also consider the cost of fixing the search structure to account for the changes in the database.

Classification

The simplest kind of query is to locate a record that has a specific field (the key) equal to a specified value v. Other common kinds of query are "find the item with smallest (or largest) key value", "find the item with largest key value not exceeding v", and "find all items with key values between specified bounds vmin and vmax".

In certain databases the key values may be points in some multi-dimensional space. For example, the key may be a geographic position (latitude and longitude) on the Earth. In that case, common kinds of queries are "find the record with a key closest to a given point v", "find all items whose key lies at a given distance from v", or "find all items within a specified region R of the space". A common special case of the latter are simultaneous range queries on two or more simple keys, such as "find all employee records with salary between 50,000 and 100,000 and hired between 1995 and 2007".

Single ordered keys

- Array, if the key values span a moderately compact interval
- Priority-sorted list; see linear search
- Key-sorted array; see binary search
- Self-balancing binary search tree
- Hash table

Finding the smallest element

- Heap

Asymptotic amortized worst-case analysis

In this table, the asymptotic notation O(f(n)) means "not exceeding some fixed multiple of f(n) in the worst case."


                            Insert      Delete       Search     Find maximum  Space usage
Unsorted array              O(1)        O(1)         O(n)       O(n)          O(n)
Value-indexed array         O(1)        O(1)         O(1)       O(n)          O(n)
Sorted array                O(n)        O(n)         O(log n)   O(1)          O(n)
Unsorted linked list        O(1)*       O(1)*        O(n)       O(n)          O(n)
Sorted linked list          O(n)*       O(1)*        O(n)       O(1)          O(n)
Self-balancing binary tree  O(log n)    O(log n)     O(log n)   O(log n)      O(n)
Heap                        O(log n)    O(log n)**   O(n)       O(1)          O(n)
Hash table                  O(1)        O(1)         O(1)       O(n)          O(n)

* The cost to add or delete an element at a known location in the list (i.e. if you have an iterator to the location) is O(1). If you don't know the location, then you need to traverse the list to the location of deletion/insertion, which takes O(n) time.
** The deletion cost is O(log n) for the minimum or maximum, and O(n) for an arbitrary element.

This table is only an approximate summary; for each data structure there are special situations and variants that may lead to different costs. Also, two or more data structures can be combined to obtain lower costs.


Persistent data structure

In computing, a persistent data structure is a data structure that always preserves the previous version of itself when it is modified. Such data structures are effectively immutable, as their operations do not (visibly) update the structure in-place, but instead always yield a new updated structure. (A persistent data structure is not a data structure committed to persistent storage, such as a disk; this is a different and unrelated sense of the word "persistent.")

A data structure is partially persistent if all versions can be accessed but only the newest version can be modified. The data structure is fully persistent if every version can be both accessed and modified. If there is also a meld or merge operation that can create a new version from two previous versions, the data structure is called confluently persistent. Structures that are not persistent are called ephemeral.[1]

These types of data structures are particularly common in logical and functional programming, and in a purely functional program all data is immutable, so all data structures are automatically fully persistent.[1] Persistent data structures can also be created using in-place updating of data, and these may, in general, use less time or storage space than their purely functional counterparts.

While persistence can be achieved by simple copying, this is inefficient in time and space, because most operations make only small changes to a data structure. A better method is to exploit the similarity between the new and old versions to share structure between them, such as using the same subtree in a number of tree structures. However, because it rapidly becomes infeasible to determine how many previous versions share which parts of the structure, and because it is often desirable to discard old versions, this necessitates an environment with garbage collection.


Partially persistent

In the partially persistent model, we may query any previous version of the data structure, but we may only update the latest version. This implies a linear ordering among the versions. Three methods, illustrated on a balanced binary search tree, follow.

Fat Node

The fat node method records all changes made to node fields in the nodes themselves, without erasing old values of the fields. This requires that we allow nodes to become arbitrarily "fat". In other words, each fat node contains the same information and pointer fields as an ephemeral node, along with space for an arbitrary number of extra field values. Each extra field value has an associated field name and a version stamp which indicates the version in which the named field was changed to have the specified value. In addition, each fat node has its own version stamp, indicating the version in which the node was created. The only purpose of nodes having version stamps is to make sure that each node contains only one value per field name per version. In order to navigate through the structure, each original field value in a node has a version stamp of zero. (A minimal sketch follows.)

Complexity of Fat Node

The fat node method requires O(1) space for every modification: just store the new data. Each modification takes O(1) additional time to store the modification at the end of the modification history. This is an amortized time bound, assuming we store the modification history in a growable array. For access time, we must find the right version at each node as we traverse the structure. If we made m modifications, then each access operation has O(log m) slowdown, and a query in our region computation will take O(log^2 n) time. Since we have arranged the modification history by time stamp, it takes O(log m) time to find the last modification before an arbitrary time stamp.
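A minimal fat-node sketch for a single integer field, assuming versions are integers and using a TreeMap as the searchable modification history; floorEntry performs the O(log m) search for the last modification at or before the queried version. The names are illustrative, and a full structure would of course hold several such fields plus pointers.

import java.util.TreeMap;

// One "fat" field: every store is kept, stamped with its version.
public class FatNodeField {
    private final TreeMap<Integer, Integer> history = new TreeMap<>();

    public void store(int version, int value) {
        history.put(version, value);             // O(1) amortized extra space per change
    }

    // fetch(v): the field's value as of version v, or null if never set by then.
    public Integer fetch(int version) {
        java.util.Map.Entry<Integer, Integer> e = history.floorEntry(version);
        return e == null ? null : e.getValue();  // O(log m) for m modifications
    }
}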

Path Copying

The path-copying method makes a copy of every node on the path to the node we are about to insert or delete. The change must then be cascaded back through the data structure: all nodes that pointed to the old node must be modified to point to the new node instead. These modifications cause more cascading changes, and so on, until the root is reached. An array of roots indexed by timestamp is maintained; the data structure pointed to by time t's root is exactly time t's data structure. (The tree insertion example later in this article is an instance of path copying.)

Complexity of Path Copying

With m modifications, this costs O(log m) additive lookup time. Modification time and space are bounded by the size of the structure, since a single modification may cause the entire structure to be copied. That is O(m) for one update, and thus O(n^2) preprocessing time.

A combination

Sleator, Tarjan et al. came up with a way to combine the advantages of fat nodes and path copying, achieving O(1) access slowdown and O(1) modification space and time. In each node, we store one modification box. This box can hold one modification to the node (either a modification to one of the pointers, to the node's key, or to some other piece of node-specific data) and a timestamp for when that modification was applied. Initially, every node's modification box is empty.

Whenever we access a node, we check the modification box and compare its timestamp against the access time. (The access time specifies the version of the data structure that we care about.) If the modification box is empty, or the access time is before the modification time, then we ignore the modification box and just deal with the normal part of the node. On the other hand, if the access time is after the modification time, then we use the value in the modification box, overriding the corresponding value in the node. (Say the modification box holds a new left pointer. Then we'll use it instead of the normal left pointer, but we'll still use the normal right pointer.)

Modifying a node works like this. (We assume that each modification touches one pointer or similar field.) If the node's modification box is empty, then we fill it with the modification. Otherwise, the modification box is full. We make a copy of the node, but using only the latest values. (That is, we overwrite one of the node's fields with the value that was stored in the modification box.) Then we perform the modification directly on the new node, without using the modification box. (We overwrite one of the new node's fields, and its modification box stays empty.) Finally, we cascade this change to the node's parent, just like path copying. (This may involve filling the parent's modification box, or making a copy of the parent recursively. If the node has no parent, it is the root, and we add the new root to a sorted array of roots.)

With this algorithm, given any time t, at most one modification box exists in the data structure with time t. Thus, a modification at time t splits the tree into three parts: one part contains the data from before time t, one part contains the data from after time t, and one part was unaffected by the modification.

Complexity of the combination

Time and space for modifications require amortized analysis. A modification takes O(1) amortized space and O(1) amortized time. To see why, use a potential function Φ, where Φ(T) is the number of full live nodes in T. The live nodes of T are just the nodes that are reachable from the current root at the current time (that is, after the last modification). The full live nodes are the live nodes whose modification boxes are full.

Each modification involves some number of copies, say k, followed by one change to a modification box. (Well, not quite: you could add a new root, but that doesn't change the argument.) Consider each of the k copies. Each costs O(1) space and time, but decreases the potential function by one. (First, the node we copy must be full and live, so it contributes to the potential function. The potential function will only drop, however, if the old node isn't reachable in the new tree. But we know it isn't reachable in the new tree: the next step in the algorithm will be to modify the node's parent to point at the copy. Finally, we know the copy's modification box is empty. Thus, we've replaced a full live node with an empty live node, and Φ goes down by one.) The final step fills a modification box, which costs O(1) time and increases Φ by one.

Putting it all together, the change in Φ is ΔΦ = 1 - k. Thus, we've paid O(k + ΔΦ) = O(1) space and O(k + ΔΦ + 1) = O(1) time.


Fully persistent

In the fully persistent model, both updates and queries are allowed on any version of the data structure.

Confluently persistent

In the confluently persistent model, we use combinators to combine inputs from more than one previous version into a single new version. Rather than a branching tree, combinations of versions induce a DAG (directed acyclic graph) structure on the version graph.

Examples of persistent data structures

Perhaps the simplest persistent data structure is the singly linked list or cons-based list, a simple list of objects formed by each carrying a reference to the next in the list. This is persistent because we can take a tail of the list, meaning the last k items for some k, and add new nodes on to the front of it. The tail will not be duplicated, instead becoming shared between both the old list and the new list. So long as the contents of the tail are immutable, this sharing will be invisible to the program.

Many common reference-based data structures, such as red-black trees[2] and stacks,[3] can easily be adapted to create a persistent version. Some others need slightly more effort, for example: queues; double-ended queues (deques); min-deques (which have an additional operation min returning the minimal element in constant time without incurring additional complexity on the standard operations of queuing and dequeuing at both ends); random access lists (with constant-time cons/head as for a singly linked list, but with an additional random access operation of sub-linear, most often logarithmic, complexity); random access queues; random access double-ended queues; and random access stacks (as well as random access min-lists, min-queues, min-deques, and min-stacks).

There also exist persistent data structures which use destructive operations, making them impossible to implement efficiently in purely functional languages (like Haskell), but possible in languages like C or Java. These types of data structures can often be avoided with proper design. One primary advantage to using purely persistent data structures is that they often behave better in multi-threaded environments.


Linked lists

This example is taken from Okasaki (see the bibliography). Singly linked lists are the bread-and-butter data structure in functional languages. In ML-derived languages and Haskell, they are purely functional because once a node in the list has been allocated, it cannot be modified, only copied or destroyed. (Note that ML itself is not purely functional.) Consider the two lists:

xs = [0, 1, 2]
ys = [3, 4, 5]

These would be represented in memory by:

[Figure: the lists xs and ys as separate node chains]

where a circle indicates a node in the list (the outgoing arrow showing the second element of the node, which is a pointer to another node). Now concatenating the two lists:

zs = xs ++ ys

results in the following memory structure:


[Figure: zs consists of copies of the xs nodes linked onto the shared ys nodes]

Notice that the nodes in list xs have been copied, but the nodes in ys are shared. As a result, the original lists (xs and ys) persist and have not been modified. The reason for the copy is that the last node in xs (the node containing the original value 2) cannot be modified to point to the start of ys, because that would change the value of xs. (A minimal Java rendering of this structure follows.)
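Here is a minimal Java rendering of this persistent cons-list (illustrative names; only the operations used above): cons never mutates anything, and append copies the nodes of xs while sharing ys unchanged, exactly as in the diagrams.

// A persistent singly linked list; null represents the empty list.
public final class PList {
    public final int head;
    public final PList tail;   // shared, never modified

    private PList(int head, PList tail) { this.head = head; this.tail = tail; }

    public static PList cons(int head, PList tail) { return new PList(head, tail); }

    // zs = append(xs, ys): copies the nodes of xs, shares ys.
    public static PList append(PList xs, PList ys) {
        return xs == null ? ys : cons(xs.head, append(xs.tail, ys));
    }
}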

Trees

This example is taken from Okasaki (see the bibliography). Consider a binary tree used for fast searching, where every node has the recursive invariant that subnodes on the left are less than the node, and subnodes on the right are greater than the node. For instance, the set of data

xs = [a, b, c, d, f, g, h]

might be represented by the following binary search tree:

[Figure: a balanced binary search tree containing xs]


A function which inserts data into the binary tree and maintains the invariant is:

fun insert (x, E) = T (E, x, E)
  | insert (x, s as T (a, y, b)) =
      if x < y then T (insert (x, a), y, b)
      else if x > y then T (a, y, insert (x, b))
      else s

After executing ys = insert ("e", xs) we end up with the following:

[Figure: the new tree ys, sharing most of its nodes with xs]


Notice two points: first, the original tree (xs) persists; second, many common nodes are shared between the old tree and the new tree. Such persistence and sharing is difficult to manage without some form of garbage collection (GC) to automatically free up nodes which have no live references, which is why GC is a feature commonly found in functional programming languages.

Reference cycles

Since every value in a purely functional computation is built up out of existing values, it would seem that it is impossible to create a cycle of references. In that case, the reference graph (the graph of the references from object to object) could only be a directed acyclic graph. However, in most functional languages, functions can be defined recursively; this capability allows recursive structures using functional suspensions. In lazy languages, such as Haskell, all data structures are represented as implicitly suspended thunks; in these languages any data structure can be recursive because a value can be defined in terms of itself. Some other languages, such as OCaml, allow the explicit definition of recursive values.

References

[1] Kaplan, Haim (2001). "Persistent data structures" (http://www.math.tau.ac.il/~haimk/papers/persistent-survey.ps). Handbook on Data Structures and Applications (CRC Press).
[2] Neil Sarnak, Robert E. Tarjan (1986). "Planar Point Location Using Persistent Search Trees" (http://www.link.cs.cmu.edu/15859-f07/papers/point-location.pdf). Communications of the ACM 29 (7): 669-679. doi:10.1145/6138.6151.
[3] Chris Okasaki. Purely Functional Data Structures (thesis) (http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf).

Further reading

- Persistent Data Structures and Managed References (http://www.infoq.com/presentations/Value-Identity-State-Rich-Hickey) - video presentation by Rich Hickey on Clojure's use of persistent data structures and how they support concurrency
- Making Data Structures Persistent (http://www.cs.cmu.edu/~sleator/papers/Persistence.htm) by James R. Driscoll, Neil Sarnak, Daniel D. Sleator, Robert E. Tarjan
- Fully persistent arrays for efficient incremental updates and voluminous reads (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.1317)
- Real-Time Deques, Multihead Turing Machines, and Purely Functional Programming (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.2895)
- Purely Functional Data Structures by Chris Okasaki, Cambridge University Press, 1998, ISBN 0-521-66350-4.
- Purely Functional Data Structures (http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf), thesis by Chris Okasaki (PDF format)
- Fully Persistent Lists with Catenation (http://www.cs.cmu.edu/~sleator/papers/fully-persistent-lists.pdf) by James R. Driscoll, Daniel D. Sleator, Robert E. Tarjan (PDF)
- Persistent Data Structures (http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-854j-advanced-algorithms-fall-2005/lecture-notes/persistent.pdf) from the MIT open course Advanced Algorithms (http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-854j-advanced-algorithms-fall-2005)


External links

- Lightweight Java implementation of Persistent Red-Black Trees (http://wiki.edinburghhacklab.com/PersistentRedBlackTreeSet)

Concurrent data structure

In computer science, a concurrent data structure is a particular way of storing and organizing data for access by multiple computing threads (or processes) on a computer.

Historically, such data structures were used on uniprocessor machines with operating systems that supported multiple computing threads (or processes). The term concurrency captured the multiplexing/interleaving of the threads' operations on the data by the operating system, even though the processors never issued two operations that accessed the data simultaneously. Today, as multiprocessor computer architectures that provide parallelism become the dominant computing platform (through the proliferation of multi-core processors), the term has come to stand mainly for data structures that can be accessed by multiple threads which may actually access the data simultaneously because they run on different processors that communicate with one another. The concurrent data structure (sometimes also called a shared data structure) is usually considered to reside in an abstract storage environment called shared memory, though this memory may be physically implemented as either a "tightly coupled" or a distributed collection of storage modules.

Basic principles

Concurrent data structures, intended for use in parallel or distributed computing environments, differ from "sequential" data structures, intended for use on a uniprocessor machine, in several ways.[1] Most notably, in a sequential environment one specifies the data structure's properties and checks that they are implemented correctly by providing safety properties. In a concurrent environment, the specification must also describe liveness properties which an implementation must provide. Safety properties usually state that something bad never happens, while liveness properties state that something good keeps happening. These properties can be expressed, for example, using linear temporal logic.

The type of liveness requirements tends to define the data structure. The method calls can be blocking or non-blocking. Data structures are not restricted to one type or the other, and can allow combinations where some method calls are blocking and others are non-blocking (examples can be found in the Java concurrency software library).

The safety properties of concurrent data structures must capture their behavior given the many possible interleavings of methods called by different threads. It is quite intuitive to specify how abstract data structures behave in a sequential setting in which there are no interleavings. Therefore, many mainstream approaches for arguing the safety properties of a concurrent data structure (such as serializability, linearizability, sequential consistency, and quiescent consistency[1]) specify the structure's properties sequentially, and map its concurrent executions to a collection of sequential ones.

In order to guarantee the safety and liveness properties, concurrent data structures must typically (though not always) allow threads to reach consensus as to the results of their simultaneous data access and modification requests. To support such agreement, concurrent data structures are implemented using special primitive synchronization operations (see synchronization primitives) available on modern multiprocessor machines that allow multiple threads to reach consensus. This consensus can be reached in a blocking manner by using locks, or without locks, in which case it is non-blocking. (A small sketch contrasting the two follows.) There is a wide body of theory on the design of concurrent data structures (see bibliographical references).
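As a small illustration of the blocking/non-blocking distinction (a sketch, not a recommended design): the first counter below reaches agreement through a lock, while the second does so through the compare-and-set style primitives underlying java.util.concurrent.atomic, so no thread ever blocks.

import java.util.concurrent.atomic.AtomicLong;

// Blocking: threads reach consensus by acquiring a lock.
class BlockingCounter {
    private long value = 0;
    public synchronized void increment() { value++; }
    public synchronized long get() { return value; }
}

// Non-blocking: threads reach consensus via an atomic hardware primitive.
class NonBlockingCounter {
    private final AtomicLong value = new AtomicLong();
    public void increment() { value.incrementAndGet(); }  // lock-free update
    public long get() { return value.get(); }
}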


Design and Implementation

Concurrent data structures are significantly more difficult to design and to verify as being correct than their sequential counterparts. The primary source of this additional difficulty is concurrency, exacerbated by the fact that threads must be thought of as being completely asynchronous: they are subject to operating system preemption, page faults, interrupts, and so on.

On today's machines, the layout of processors and memory, the layout of data in memory, and the communication load on the various elements of the multiprocessor architecture all influence performance. Furthermore, there is a tension between correctness and performance: algorithmic enhancements that seek to improve performance often make it more difficult to design and verify a correct data structure implementation.

A key measure for performance is scalability, captured by the speedup of the implementation. Speedup is a measure of how effectively the application is utilizing the machine it is running on. On a machine with P processors, the speedup is the ratio of the structure's execution time on a single processor to its execution time on P processors. Ideally, we want linear speedup: we would like to achieve a speedup of P when using P processors. Data structures whose speedup grows with P are called scalable. The extent to which one can scale the performance of a concurrent data structure is captured by a formula known as Amdahl's law and more refined versions of it such as Gustafson's law.

A key issue with the performance of concurrent data structures is the level of memory contention: the overhead in traffic to and from memory as a result of multiple threads concurrently attempting to access the same locations in memory. This issue is most acute with blocking implementations, in which locks control access to memory. In order to acquire a lock, a thread must repeatedly attempt to modify that location. On a cache-coherent multiprocessor (one in which processors have local caches that are updated by hardware in order to keep them consistent with the latest values stored) this results in long waiting times for each attempt to modify the location, and is exacerbated by the additional memory traffic associated with unsuccessful attempts to acquire the lock.


References

[1] Mark Moir and Nir Shavit (2007). "Concurrent Data Structures" (http://www.cs.tau.ac.il/~shanir/concurrent-data-structures.pdf). In Dinesh Mehta and Sartaj Sahni, Handbook of Data Structures and Applications (1st ed.). Chapman and Hall/CRC Press. pp. 47-14 to 47-30.

Further reading

- Nancy Lynch, "Distributed Computing"
- Hagit Attiya and Jennifer Welch, "Distributed Computing: Fundamentals, Simulations and Advanced Topics, 2nd Ed"
- Doug Lea, "Concurrent Programming in Java: Design Principles and Patterns"
- Maurice Herlihy and Nir Shavit, "The Art of Multiprocessor Programming"
- Mattson, Sanders, and Massingill, "Patterns for Parallel Programming"

External links

- Multithreaded data structures for parallel computing, Part 1 (http://www.ibm.com/developerworks/aix/library/au-multithreaded_structures1/index.html) (Designing concurrent data structures) by Arpan Sen
- Multithreaded data structures for parallel computing, Part 2 (http://www.ibm.com/developerworks/aix/library/au-multithreaded_structures2/index.html) (Designing concurrent data structures without mutexes) by Arpan Sen
- libcds (http://libcds.sourceforge.net/) - C++ library of lock-free containers and safe memory reclamation schema


Abstract data types

Abstract data type

In computer science, an abstract data type (ADT) is a mathematical model for a certain class of data structures that have similar behavior, or for certain data types of one or more programming languages that have similar semantics. An abstract data type is defined indirectly, only by the operations that may be performed on it and by mathematical constraints on the effects (and possibly cost) of those operations.[1]

For example, an abstract stack data structure could be defined by three operations: push, which inserts some data item onto the structure; pop, which extracts an item from it (with the constraint that each pop always returns the most recently pushed item that has not been popped yet); and peek, which allows data on top of the structure to be examined without removal. (A sketch of such an interface appears at the end of this introduction.) When analyzing the efficiency of algorithms that use stacks, one may also specify that all operations take the same time no matter how many items have been pushed into the stack, and that the stack uses a constant amount of storage for each element.

Abstract data types are purely theoretical entities, used (among other things) to simplify the description of abstract algorithms, to classify and evaluate data structures, and to formally describe the type systems of programming languages. However, an ADT may be implemented by specific data types or data structures, in many ways and in many programming languages, or described in a formal specification language. ADTs are often implemented as modules: the module's interface declares procedures that correspond to the ADT operations, sometimes with comments that describe the constraints. This information-hiding strategy allows the implementation of the module to be changed without disturbing the client programs.

The term abstract data type can also be regarded as a generalised approach to a number of algebraic structures, such as lattices, groups, and rings;[2] this can be treated as part of the subject area of artificial intelligence. The notion of abstract data types is related to the concept of data abstraction, important in object-oriented programming and design-by-contract methodologies for software development.
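The abstract stack just described can be written down as an interface that names only the operations, leaving every representation choice open. This is one minimal Java rendering (the generic parameter and the names are assumptions for illustration, not part of the definition above):

// An abstract stack: only the operations are specified. Any implementation
// is valid as long as pop returns the most recently pushed, un-popped item.
interface Stack<T> {
    void push(T item);  // insert an item onto the structure
    T pop();            // remove and return the most recently pushed item
    T peek();           // return that item without removing it
}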

Defining an abstract data type (ADT)
An abstract data type is defined as a mathematical model of the data objects that make up a data type as well as the functions that operate on these objects. There are no standard conventions for defining them. A broad division may be drawn between "imperative" and "functional" definition styles.

Imperative abstract data type definitions
In the "imperative" view, which is closer to the philosophy of imperative programming languages, an abstract data structure is conceived as an entity that is mutable, meaning that it may be in different states at different times. Some operations may change the state of the ADT; therefore, the order in which operations are evaluated is important, and the same operation on the same entities may have different effects if executed at different times, just like the instructions of a computer or the commands and procedures of an imperative language. To underscore this view, it is customary to say that the operations are executed or applied, rather than evaluated. The imperative style is often used when describing abstract algorithms; this is the style followed, for example, by Donald E. Knuth in The Art of Computer Programming.

Abstract variable
Imperative ADT definitions often depend on the concept of an abstract variable, which may be regarded as the simplest non-trivial ADT. An abstract variable V is a mutable entity that admits two operations: store(V,x), where x is a value of unspecified nature, and fetch(V), which yields a value, with the constraint that fetch(V) always returns the value x used in the most recent store(V,x) operation on the same variable V.

As in many programming languages, the operation store(V,x) is often written V ← x (or some similar notation), and fetch(V) is implied whenever a variable V is used in a context where a value is required. Thus, for example, V ← V + 1 is commonly understood to be a shorthand for store(V, fetch(V) + 1).

In this definition, it is implicitly assumed that storing a value into a variable U has no effect on the state of a distinct variable V. To make this assumption explicit, one could add the constraint that if U and V are distinct variables, the sequence { store(U,x); store(V,y) } is equivalent to { store(V,y); store(U,x) }. More generally, ADT definitions often assume that any operation that changes the state of one ADT instance has no effect on the state of any other instance (including other instances of the same ADT), unless the ADT axioms imply that the two instances are connected (aliased) in that sense. For example, when extending the definition of abstract variable to include abstract records, the operation that selects a field from a record variable R must yield a variable V that is aliased to that part of R.

The definition of an abstract variable V may also restrict the stored values x to members of a specific set X, called the range or type of V. As in programming languages, such restrictions may simplify the description and analysis of algorithms, and improve their readability.

Note that this definition does not imply anything about the result of evaluating fetch(V) when V is un-initialized, that is, before performing any store operation on V. An algorithm that does so is usually considered invalid, because its effect is not defined. (However, there are some important algorithms whose efficiency strongly depends on the assumption that such a fetch is legal, and returns some arbitrary value in the variable's range.)

Instance creation
Some algorithms need to create new instances of some ADT (such as new variables, or new stacks). To describe such algorithms, one usually includes in the ADT definition a create() operation that yields an instance of the ADT, usually with axioms equivalent to "the result of create() is distinct from any instance S in use by the algorithm". This axiom may be strengthened to also exclude partial aliasing with other instances. On the other hand, this axiom still allows implementations of create() to yield a previously created instance that has become inaccessible to the program.
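As an illustration of the correspondence described above, store and fetch for a variable with range int can be modeled directly in C. This is a minimal sketch; the struct Var is a modeling device assumed here, not part of the definition:

typedef struct { int value; } Var;             /* An abstract variable with range (type) int. */

void store(Var *v, int x) { v->value = x; }    /* store(V,x), usually written V <- x.       */
int  fetch(const Var *v)  { return v->value; } /* fetch(V), implied wherever V is used.     */

/* V <- V + 1 is then shorthand for:  store(&V, fetch(&V) + 1);  */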


Preconditions, postconditions, and invariants
In imperative-style definitions, the axioms are often expressed by preconditions, which specify when an operation may be executed; postconditions, which relate the states of the ADT before and after the execution of each operation; and invariants, which specify properties of the ADT that are not changed by the operations.

Example: abstract stack (imperative)
As another example, an imperative definition of an abstract stack could specify that the state of a stack S can be modified only by the operations push(S,x), where x is some value of unspecified nature, and pop(S), which yields a value as a result, with the constraint that

 For any value x and any abstract variable V, the sequence of operations { push(S,x); V ← pop(S) } is equivalent to { V ← x }.

Since the assignment { V ← x }, by definition, cannot change the state of S, this condition implies that { V ← pop(S) } restores S to the state it had before the { push(S,x) }. From this condition and from the properties of abstract variables, it follows, for example, that the sequence

 { push(S,x); push(S,y); U ← pop(S); push(S,z); V ← pop(S); W ← pop(S); }

where x, y, and z are any values, and U, V, W are pairwise distinct variables, is equivalent to

 { U ← y; V ← z; W ← x }

Here it is implicitly assumed that operations on a stack instance do not modify the state of any other ADT instance, including other stacks; that is,

 For any values x, y, and any distinct stacks S and T, the sequence { push(S,x); push(T,y) } is equivalent to { push(T,y); push(S,x) }.

A stack ADT definition usually also includes a Boolean-valued function empty(S) and a create() operation that returns a stack instance, with axioms equivalent to

 create() ≠ S for any stack S (a newly created stack is distinct from all previous stacks)
 empty(create()) (a newly created stack is empty)
 not empty(push(S,x)) (pushing something onto a stack makes it non-empty)

Single-instance style
Sometimes an ADT is defined as if only one instance of it existed during the execution of the algorithm, and all operations were applied to that instance, which is not explicitly notated. For example, the abstract stack above could have been defined with operations push(x) and pop(), that operate on "the" only existing stack. ADT definitions in this style can be easily rewritten to admit multiple coexisting instances of the ADT, by adding an explicit instance parameter (like S in the previous example) to every operation that uses or modifies the implicit instance. On the other hand, some ADTs cannot be meaningfully defined without assuming multiple instances. This is the case when a single operation takes two distinct instances of the ADT as parameters. For an example, consider augmenting the definition of the stack ADT with an operation compare(S,T) that checks whether the stacks S and T contain the same items in the same order.



Functional ADT definitions
Another way to define an ADT, closer to the spirit of functional programming, is to consider each state of the structure as a separate entity. In this view, any operation that modifies the ADT is modeled as a mathematical function that takes the old state as an argument and returns the new state as part of the result. Unlike the "imperative" operations, these functions have no side effects. Therefore, the order in which they are evaluated is immaterial, and the same operation applied to the same arguments (including the same input states) will always return the same results (and output states). In the functional view, in particular, there is no way (or need) to define an "abstract variable" with the semantics of imperative variables (namely, with fetch and store operations). Instead of storing values into variables, one passes them as arguments to functions.

Example: abstract stack (functional)
For example, a complete functional-style definition of a stack ADT could use the three operations:

 push: takes a stack state and an arbitrary value, returns a stack state;
 top: takes a stack state, returns a value;
 pop: takes a stack state, returns a stack state;

with the following axioms:

 top(push(s,x)) = x (pushing an item onto a stack leaves it at the top)
 pop(push(s,x)) = s (pop undoes the effect of push)

In a functional-style definition there is no need for a create operation. Indeed, there is no notion of "stack instance". The stack states can be thought of as being potential states of a single stack structure, and two stack states that contain the same values in the same order are considered to be identical states. This view actually mirrors the behavior of some concrete implementations, such as linked lists with hash cons.

Instead of create(), a functional definition of a stack ADT may assume the existence of a special stack state, the empty stack, designated by a special symbol like Λ or "()"; or define a bottom() operation that takes no arguments and returns this special stack state. Note that the axioms imply that push(Λ,x) ≠ Λ. In a functional definition of a stack one does not need an empty predicate: instead, one can test whether a stack is empty by testing whether it is equal to Λ.

Note that these axioms do not define the effect of top(s) or pop(s) unless s is a stack state returned by a push. Since push leaves the stack non-empty, those two operations are undefined (hence invalid) when s = Λ. On the other hand, the axioms (and the lack of side effects) imply that push(s,x) = push(t,y) if and only if x = y and s = t.

As in some other branches of mathematics, it is customary to assume also that the stack states are only those whose existence can be proved from the axioms in a finite number of steps. In the stack ADT example above, this rule means that every stack is a finite sequence of values, that becomes the empty stack (Λ) after a finite number of pops. By themselves, the axioms above do not exclude the existence of infinite stacks (that can be popped forever, each time yielding a different state) or circular stacks (that return to the same state after a finite number of pops). In particular, they do not exclude states s such that pop(s) = s or push(s,x) = s for some x. However, since one cannot obtain such stack states with the given operations, they are assumed "not to exist".
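The axiom pop(push(s,x)) = s is realized literally by persistent implementations that share structure between states. The following C sketch is one such realization under stated assumptions (no garbage collection, so old states are simply never reclaimed, and NULL plays the role of the empty state Λ); the type and function names are illustrative:

#include <stdlib.h>

typedef struct Node {
    void *item;
    const struct Node *rest;    /* The previous state; never modified after creation. */
} Node;
typedef const Node *StackState; /* NULL represents the empty stack state (Λ). */

StackState push(StackState s, void *x) {
    Node *n = malloc(sizeof *n);
    n->item = x;
    n->rest = s;                /* The old state s is shared, not copied or changed. */
    return n;
}

void *top(StackState s)      { return s->item; } /* Undefined (invalid) when s is NULL. */
StackState pop(StackState s) { return s->rest; } /* pop(push(s,x)) == s holds exactly.  */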


Advantages of abstract data typing

Encapsulation
Abstraction provides a promise that any implementation of the ADT has certain properties and abilities; knowing these is all that is required to make use of an ADT object. The user does not need any technical knowledge of how the implementation works to use the ADT. In this way, the implementation may be complex but will be encapsulated in a simple interface when it is actually used.

Localization of change
Code that uses an ADT object will not need to be edited if the implementation of the ADT is changed. Since any changes to the implementation must still comply with the interface, and since code using an ADT may only refer to properties and abilities specified in the interface, changes may be made to the implementation without requiring any changes in code where the ADT is used.

Flexibility
Different implementations of an ADT, having all the same properties and abilities, are equivalent and may be used somewhat interchangeably in code that uses the ADT. This gives a great deal of flexibility when using ADT objects in different situations. For example, different implementations of an ADT may be more efficient in different situations; it is possible to use each in the situation where they are preferable, thus increasing overall efficiency.

Typical operations
Some operations that are often specified for ADTs (possibly under other names) are

- compare(s,t), that tests whether two structures are equivalent in some sense;
- hash(s), that computes some standard hash function from the instance's state;
- print(s) or show(s), that produces a human-readable representation of the structure's state.

In imperative-style ADT definitions, one often also finds

- create(), that yields a new instance of the ADT;
- initialize(s), that prepares a newly created instance s for further operations, or resets it to some "initial state";
- copy(s,t), that puts instance s in a state equivalent to that of t;
- clone(t), that performs s ← new(), copy(s,t), and returns s;
- free(s) or destroy(s), that reclaims the memory and other resources used by s.

The free operation is not normally relevant or meaningful, since ADTs are theoretical entities that do not "use memory". However, it may be necessary when one needs to analyze the storage used by an algorithm that uses the ADT. In that case one needs additional axioms that specify how much memory each ADT instance uses, as a function of its state, and how much of it is returned to the pool by free.

Examples
Some common ADTs, which have proved useful in a great variety of applications, are

- Container
- Deque
- List
- Map
- Multimap
- Multiset
- Priority queue
- Queue
- Set
- Stack
- String
- Tree

Each of these ADTs may be defined in many ways and variants, not necessarily equivalent. For example, a stack ADT may or may not have a count operation that tells how many items have been pushed and not yet popped. This choice makes a difference not only for its clients but also for the implementation.

Implementation
Implementing an ADT means providing one procedure or function for each abstract operation. The ADT instances are represented by some concrete data structure that is manipulated by those procedures, according to the ADT's specifications. Usually there are many ways to implement the same ADT, using several different concrete data structures. Thus, for example, an abstract stack can be implemented by a linked list or by an array.

An ADT implementation is often packaged as one or more modules, whose interface contains only the signature (number and types of the parameters and results) of the operations. The implementation of the module, namely the bodies of the procedures and the concrete data structure used, can then be hidden from most clients of the module. This makes it possible to change the implementation without affecting the clients.

When implementing an ADT, each instance (in imperative-style definitions) or each state (in functional-style definitions) is usually represented by a handle of some sort.[3]

Modern object-oriented languages, such as C++ and Java, support a form of abstract data types. When a class is used as a type, it is an abstract type that refers to a hidden representation. In this model an ADT is typically implemented as a class, and each instance of the ADT is an object of that class. The module's interface typically declares the constructors as ordinary procedures, and most of the other ADT operations as methods of that class. However, such an approach does not easily encapsulate multiple representational variants found in an ADT. It also can undermine the extensibility of object-oriented programs. In a pure object-oriented program that uses interfaces as types, types refer to behaviors, not representations.

Example: implementation of the stack ADT
As an example, here is an implementation of the stack ADT above in the C programming language.

Imperative-style interface
An imperative-style interface might be:

typedef struct stack_Rep stack_Rep;       /* Type: instance representation (an opaque record). */
typedef stack_Rep *stack_T;               /* Type: handle to a stack instance (an opaque pointer). */
typedef void *stack_Item;                 /* Type: value that can be stored in stack (arbitrary address). */

stack_T stack_create(void);               /* Create new stack instance, initially empty. */
void stack_push(stack_T s, stack_Item e); /* Add an item at the top of the stack. */
stack_Item stack_pop(stack_T s);          /* Remove the top item from the stack and return it. */
int stack_empty(stack_T s);               /* Check whether stack is empty. */

This implementation could be used in the following manner:

#include <stack.h>                /* Include the stack interface. */
stack_T t = stack_create();       /* Create a stack instance. */
int foo = 17;                     /* An arbitrary datum. */
stack_push(t, &foo);              /* Push the address of 'foo' onto the stack. */
...
void *e = stack_pop(t);           /* Get the top item and delete it from the stack. */
if (stack_empty(t)) { ... }       /* Do something if stack is empty. */

This interface can be implemented in many ways. The implementation may be arbitrarily inefficient, since the formal definition of the ADT, above, does not specify how much space the stack may use, nor how long each operation should take. It also does not specify whether the stack state t continues to exist after a call s ← pop(t). In practice the formal definition should specify that the space is proportional to the number of items pushed and not yet popped, and that every one of the operations above must finish in a constant amount of time, independently of that number. To comply with these additional specifications, the implementation could use a linked list, or an array (with dynamic resizing) together with two integers (an item count and the array size); a sketch of the latter appears after the functional-style interface below.

Functional-style interface
Functional-style ADT definitions are more appropriate for functional programming languages, and vice versa. However, one can provide a functional-style interface even in an imperative language like C. For example:

typedef struct stack_Rep stack_Rep;          /* Type: stack state representation (an opaque record). */
typedef stack_Rep *stack_T;                  /* Type: handle to a stack state (an opaque pointer). */
typedef void *stack_Item;                    /* Type: item (arbitrary address). */

stack_T stack_empty(void);                   /* Returns the empty stack state. */
stack_T stack_push(stack_T s, stack_Item x); /* Adds x at the top of s, returns the resulting state. */
stack_Item stack_top(stack_T s);             /* Returns the item currently at the top of s. */
stack_T stack_pop(stack_T s);                /* Remove the top item from s, returns the resulting state. */

The main problem is that C lacks garbage collection, and this makes this style of programming impractical; moreover, memory allocation routines in C are slower than allocation in a typical garbage collector, thus the performance impact of so many allocations is even greater.
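To make the cost requirements above concrete, here is a minimal sketch of the imperative interface backed by an array with dynamic resizing and an item count, as just described. The growth policy and initial capacity are illustrative assumptions, error handling for failed allocations is omitted, and geometric growth makes push constant-time only in the amortized sense:

#include <stdlib.h>

typedef struct stack_Rep stack_Rep;   /* Repeated from the imperative interface above. */
typedef stack_Rep *stack_T;
typedef void *stack_Item;

struct stack_Rep {              /* Concrete representation, hidden behind stack_T. */
    stack_Item *items;          /* Dynamically resized array of items.             */
    int count;                  /* Number of items currently on the stack.         */
    int capacity;               /* Current size of the items array.                */
};

stack_T stack_create(void) {
    stack_T s = malloc(sizeof *s);
    s->count = 0;
    s->capacity = 8;            /* Illustrative initial capacity. */
    s->items = malloc(s->capacity * sizeof *s->items);
    return s;
}

void stack_push(stack_T s, stack_Item e) {
    if (s->count == s->capacity) {   /* Grow geometrically so that the amortized */
        s->capacity *= 2;            /* cost per push stays constant.            */
        s->items = realloc(s->items, s->capacity * sizeof *s->items);
    }
    s->items[s->count++] = e;
}

stack_Item stack_pop(stack_T s) {
    return s->items[--s->count];     /* Undefined if the stack is empty. */
}

int stack_empty(stack_T s) {
    return s->count == 0;
}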


ADT libraries
Many modern programming languages, such as C++ and Java, come with standard libraries that implement several common ADTs, such as those listed above.

Built-in abstract data types
The specification of some programming languages is intentionally vague about the representation of certain built-in data types, defining only the operations that can be done on them. Therefore, those types can be viewed as "built-in ADTs". Examples are the arrays in many scripting languages, such as Awk, Lua, and Perl, which can be regarded as an implementation of the Map or Table ADT.

References
[1] Barbara Liskov, Programming with Abstract Data Types, in Proceedings of the ACM SIGPLAN Symposium on Very High Level Languages, pp. 50-59, 1974, Santa Monica, California.
[2] Rudolf Lidl (2004). Abstract Algebra. Springer. ISBN 81-8128-149-7. Chapter 7, section 40.
[3] Robert Sedgewick (1998). Algorithms in C. Addison-Wesley. ISBN 0-201-31452-5. Definition 4.4.

Further reading
- Mitchell, John C.; Plotkin, Gordon (July 1988). "Abstract Types Have Existential Type" (http://theory.stanford.edu/~jcm/papers/mitch-plotkin-88.pdf). ACM Transactions on Programming Languages and Systems 10 (3).

External links
- Abstract data type (http://www.nist.gov/dads/HTML/abstractDataType.html) in the NIST Dictionary of Algorithms and Data Structures


List
In computer science, a list or sequence is an abstract data type that implements an ordered collection of values, where the same value may occur more than once. An instance of a list is a computer representation of the mathematical concept of a finite sequence. Each instance of a value in the list is usually called an item, entry, or element of the list; if the same value occurs multiple times, each occurrence is considered a distinct item.

[Figure: A singly linked list structure, implementing a list with 3 integer elements.]

The name list is also used for several concrete data structures that can be used to implement abstract lists, especially linked lists. The so-called static list structures allow only inspection and enumeration of the values. A mutable or dynamic list may allow items to be inserted, replaced, or deleted during the list's existence.

Many programming languages provide support for list data types, and have special syntax and semantics for lists and list operations. A list can often be constructed by writing the items in sequence, separated by commas, semicolons, or spaces, within a pair of delimiters such as parentheses '()', brackets '[]', braces '{}', or angle brackets '<>'. Some languages may allow list types to be indexed or sliced like array types. In object-oriented programming languages, lists are usually provided as instances of subclasses of a generic "list" class. List data types are often implemented using arrays or linked lists of some sort, but other data structures may be more appropriate for some applications. In some contexts, such as in Lisp programming, the term list may refer specifically to a linked list rather than an array.

In type theory and functional programming, abstract lists are usually defined inductively by four operations: nil, that yields the empty list; cons, which adds an item at the beginning of a list; head, that returns the first element of a list; and tail, that returns a list minus its first element. Formally, Peano's natural numbers can be defined as abstract lists with elements of unit type.

Operations
Implementation of the list data structure may provide some of the following operations:

- a constructor for creating an empty list;
- an operation for testing whether or not a list is empty;
- an operation for prepending an entity to a list;
- an operation for appending an entity to a list;
- an operation for determining the first component (or the "head") of a list;
- an operation for referring to the list consisting of all the components of a list except for its first (this is called the "tail" of the list).
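A minimal sketch in C of these operations over a singly linked representation follows; the ListNode type, the int element type, and the function names are illustrative assumptions rather than a standard interface:

#include <stdbool.h>
#include <stdlib.h>

typedef struct ListNode {
    int value;
    struct ListNode *next;
} ListNode;                       /* NULL represents the empty list. */

ListNode *list_nil(void) { return NULL; }                   /* constructor: empty list */
bool list_is_empty(const ListNode *l) { return l == NULL; } /* emptiness test */

ListNode *list_prepend(ListNode *l, int x) {                /* prepend an entity: O(1) */
    ListNode *n = malloc(sizeof *n);
    n->value = x;
    n->next = l;
    return n;
}

ListNode *list_append(ListNode *l, int x) {                 /* append an entity: O(n) */
    if (l == NULL) return list_prepend(NULL, x);
    l->next = list_append(l->next, x);
    return l;
}

int list_head(const ListNode *l) { return l->value; }       /* first component ("head") */
ListNode *list_tail(ListNode *l) { return l->next; }        /* all but the first ("tail") */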

Characteristics
Lists have the following properties:

- The size of the list. This indicates how many elements there are in the list.
- Equality of lists: In mathematics, sometimes equality of lists is defined simply in terms of object identity: two lists are equal if and only if they are the same object. In modern programming languages, equality of lists is normally defined in terms of structural equality of the corresponding entries, except that if the lists are typed, then the list types may also be relevant.
- Lists may be typed. This implies that the entries in a list must have types that are compatible with the list's type. It is common that lists are typed when they are implemented using arrays.

- Each element in the list has an index. The first element commonly has index 0 or 1 (or some other predefined integer). Subsequent elements have indices that are 1 higher than the previous element. The last element has index (first index) + (size) - 1.
- It is possible to retrieve the element at a particular index.
- It is possible to traverse the list in the order of increasing index.
- It is possible to change the element at a particular index to a different value, without affecting any other elements.
- It is possible to insert an element at a particular index. The indices of the elements at that index and higher are increased by 1.
- It is possible to remove an element at a particular index. The indices of the elements at higher indices are decreased by 1.


Implementations
Lists are typically implemented either as linked lists (either singly or doubly linked) or as arrays, usually variable-length or dynamic arrays.

The standard way of implementing lists, originating with the programming language Lisp, is to have each element of the list contain both its value and a pointer indicating the location of the next element in the list. This results in either a linked list or a tree, depending on whether the list has nested sublists. Some older Lisp implementations (such as the Lisp implementation of the Symbolics 3600) also supported "compressed lists" (using CDR coding) which had a special internal representation (invisible to the user). Lists can be manipulated using iteration or recursion. The former is often preferred in imperative programming languages, while the latter is the norm in functional languages.

Lists can also be implemented as self-balancing binary search trees holding index-value pairs, with all elements residing in the fringe and internal nodes storing the right-most child's index, used to guide the search. Access to any element then takes time logarithmic in the list's size; but as long as the list does not change much, this provides the illusion of random access, and enables swap, prefix, and append operations in logarithmic time as well.

Programming language support
Some languages do not offer a list data structure, but offer the use of associative arrays or some kind of table to emulate lists. For example, Lua provides tables. Although Lua stores lists that have numerical indices as arrays internally, they still appear as hash tables.

In Lisp, lists are the fundamental data type and can represent both program code and data. In most dialects, the list of the first three prime numbers could be written as (list 2 3 5). In several dialects of Lisp, including Scheme, a list is a collection of pairs, consisting of a value and a pointer to the next pair (or null value), making a singly linked list.

Applications
As the name implies, lists can be used to store a list of records. The items in a list can be sorted for the purpose of fast search (binary search).

Because in computing lists are easier to realize than sets, a finite set in the mathematical sense can be realized as a list with additional restrictions: duplicate elements are disallowed, and order is irrelevant. If the list is sorted, it speeds up determining whether a given item is already in the set, but in order to maintain the order it requires more time to add a new entry to the list. In efficient implementations, however, sets are implemented using self-balancing binary search trees or hash tables, rather than a list.
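Since the passage above leans on binary search over a sorted list, here is a minimal sketch in C over a sorted array-backed list; the function name and the int element type are assumptions for illustration:

/* Returns the index of key in the sorted array a[0..n-1], or -1 if absent. */
int list_binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* Avoids overflow of (lo + hi) / 2. */
        if (a[mid] < key)      lo = mid + 1;
        else if (a[mid] > key) hi = mid - 1;
        else                   return mid;
    }
    return -1;                          /* Not present in the list. */
}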


Abstract definition
The abstract list type L with elements of some type E (a monomorphic list) is defined by the following functions:

 nil: () → L
 cons: E × L → L
 first: L → E
 rest: L → L

with the axioms

 first(cons(e, l)) = e
 rest(cons(e, l)) = l

for any element e and any list l. It is implicit that

 cons(e, l) ≠ l
 cons(e, l) ≠ e
 cons(e1, l1) = cons(e2, l2) if e1 = e2 and l1 = l2

Note that first(nil()) and rest(nil()) are not defined.

These axioms are equivalent to those of the abstract stack data type.

In type theory, the above definition is more simply regarded as an inductive type defined in terms of constructors: nil and cons. In algebraic terms, this can be represented as the transformation 1 + E × L → L. first and rest are then obtained by pattern matching on the cons constructor and separately handling the nil case.

The list monad
The list type forms a monad with the following functions (using E* rather than L to represent monomorphic lists with elements of type E):

 return: E → E*, given by
  return(e) = cons(e, nil)
 bind: E* × (E → F*) → F*, given by
  bind(nil, f) = nil
  bind(cons(e, l), f) = append(f(e), bind(l, f))

where append is defined as:

 append(nil, l2) = l2
 append(cons(e, l1), l2) = cons(e, append(l1, l2))

Alternatively, the monad may be defined in terms of operations return, fmap and join, with:

 fmap(f, nil) = nil
 fmap(f, cons(e, l)) = cons(f(e), fmap(f, l))
 join(nil) = nil
 join(cons(l, ll)) = append(l, join(ll))
 bind(l, f) = join(fmap(f, l))

Note that fmap, join, append and bind are well-defined, since they're applied to progressively deeper arguments at each recursive call. The list type is an additive monad, with nil as the monadic zero and append as monadic sum. Lists form a monoid under the append operation. The identity element of the monoid is the empty list, nil. In fact, this is the free monoid over the set of list elements.


Stack
In computer science, a stack is a last in, first out (LIFO) abstract data type and linear data structure. A stack can have any abstract data type as an element, but is characterized by two fundamental operations, called push and pop (or pull). The push operation adds a new item to the top of the stack, or initializes the stack if it is empty. If the stack is full and does not contain enough space to accept the given item, the stack is then considered to be in an overflow state. The pop operation removes an item from the top of the stack. A pop either reveals previously concealed items or results in an empty stack; if the stack is already empty, it goes into an underflow state (meaning no items are present in the stack to be removed).

[Figure: Simple representation of a stack.]

A stack pointer is a register that holds the address of the top of the stack; it always points to the top value of the stack. A stack is a restricted data structure, because only a small number of operations are performed on it. The nature of the pop and push operations also means that stack elements have a natural order. Elements are removed from the stack in the reverse order to the order of their addition: therefore, the lower elements are those that have been on the stack the longest.[1]

History
The stack was first proposed in 1946, in the computer design of Alan M. Turing (who used the terms "bury" and "unbury") as a means of calling and returning from subroutines. In 1957, the Germans Klaus Samelson and Friedrich L. Bauer patented the idea.[2] The same concept was developed, independently, by the Australian Charles Leonard Hamblin in the first half of 1957.[3]

Abstract definition
A stack is a basic computer science data structure and can be defined in an abstract, implementation-free manner, or it can be generally defined as a linear list of items in which all additions and deletions are restricted to one end, the top.

This is a VDM (Vienna Development Method) description of a stack:[4]

Function signatures:

 init: -> Stack
 push: N x Stack -> Stack
 top: Stack -> (N U ERROR)
 remove: Stack -> Stack
 isempty: Stack -> Boolean

(where N indicates an element (natural numbers in this case), and U indicates set union)

Semantics:

 top(init()) = ERROR
 top(push(i, s)) = i
 remove(init()) = init()
 remove(push(i, s)) = s
 isempty(init()) = true
 isempty(push(i, s)) = false

Inessential operations
In many implementations, a stack has more operations than "push" and "pop". An example is "top of stack", or "peek", which observes the top-most element without removing it from the stack.[5] Since this can be done with a "pop" and a "push" with the same data, it is not essential. An underflow condition can occur in the "stack top" operation if the stack is empty, the same as "pop". Often implementations have a function which just returns whether the stack is empty.

Software stacks

Implementation
In most high-level languages, a stack can be easily implemented either through an array or a linked list. What identifies the data structure as a stack in either case is not the implementation but the interface: the user is only allowed to pop or push items onto the array or linked list, with few other helper operations.
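As a concrete illustration, here is a minimal linked-list-backed sketch in C. The names (Frame, Stack, push, pop, peek) and the omission of error handling are illustrative assumptions, not a definitive implementation; it also shows "peek" derived from a pop followed by a push, as described in the previous section:

#include <stdlib.h>

typedef struct Frame {          /* One stack element. */
    void *item;
    struct Frame *next;         /* The element underneath, or NULL. */
} Frame;

typedef struct { Frame *top; } Stack;   /* top == NULL means the stack is empty. */

void push(Stack *s, void *item) {
    Frame *f = malloc(sizeof *f);
    f->item = item;
    f->next = s->top;
    s->top = f;                 /* The new frame becomes the top of the stack. */
}

void *pop(Stack *s) {           /* Caller must check for underflow (empty stack) first. */
    Frame *f = s->top;
    void *item = f->item;
    s->top = f->next;
    free(f);
    return item;
}

void *peek(Stack *s) {          /* The inessential "top of stack" operation,   */
    void *item = pop(s);        /* expressed as a pop followed by a push of    */
    push(s, item);              /* the same data, leaving the stack unchanged. */
    return item;
}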