User:Idkk/Data landfill


Data Landfill

Image: a computer landfill

Data mining is the extraction of information, by analysis, from large quantities of (often disparate) data. If the original data cannot be usefully analysed because of its size, its complexity, its lack of organisation, or any other reason, then we have the opposite, which may be referred to as Data Landfill.[1] From the outset, the storing of data, and the planning of that storage, must be governed so that useful access to the information locked into the data is not lost.[2]

Data may be difficult to analyse because of its size.[3][4] This is less and less of a worry, as CPU speeds appear to be growing faster than the amount of data we have to process.[5] By Westheimer's Law[5] the total quantity of data in the world can be estimated to double every two years, an annual growth of roughly 41%, which is slower than the raw growth in CPU speeds.

Figure: Data Complexity and CPU Speed (raw CPU growth 58.5% per year, effective CPU growth 55% per year)

Data may be difficult to analyse because of its complexity.[6] There is no fixed definition of "complexity" in this context: we may be looking at Effective Measure Complexity, Computational Complexity, Algorithmic Information Complexity, Shannon Entropy,[7] Kolmogorov Complexity, Crutchfield's "Topological Complexity",[8][9] the Time Complexity of some useful analysis, or some other measure of indexing difficulty or sorting speed. A review of some aspects of data complexity measures was given by Sotoca, Sánchez and Mollineda in 2005.[10] If, as a reasonable first estimate, we take data complexity to be related to the sortable interconnections between items of elementary data, then complexity grows as O(n log n), where n is the number of elementary data items. Raw computing power may well grow in line with Moore's Law, but useful computing power grows much more slowly (for example, according to Wirth's Law). This has a major effect on whether a given set of data can ever be analysed. Moore's Law can be paraphrased as saying that CPU speeds grow at about 58.5% per year: at that rate, CPU speed catches up with the growth in complexity after about ten years. If, however, useful CPU speed grows at only 53% per year (just five and a half percentage points less, a very optimistic view of software bloat), then useful CPU speed catches up with complexity only after more than sixty years.
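
The comparison above can be made concrete with a short calculation. The sketch below is a minimal illustration, not taken from the cited sources: it assumes the number of elementary data items n doubles every two years, takes complexity as n log n, and lets CPU capability grow at a fixed annual rate. The starting data size and the initial gap between complexity and capability are free parameters, and the catch-up times it reports depend strongly on them, so it will not necessarily reproduce the figures quoted above.

```python
import math

def years_to_catch_up(cpu_growth_per_year, initial_gap=3.0,
                      n0=1e12, data_doubling_years=2.0, max_years=200):
    """Years until CPU capability, compounding at a fixed annual rate,
    overtakes data complexity modelled as n * log(n), where n doubles
    every `data_doubling_years` years.

    `initial_gap` is the assumed factor by which complexity exceeds
    capability at year zero (an illustrative assumption, not a figure
    from the article or its references).
    """
    complexity0 = n0 * math.log(n0)
    capability = complexity0 / initial_gap          # start behind by the gap
    for year in range(1, max_years + 1):
        n = n0 * 2 ** (year / data_doubling_years)  # data volume this year
        complexity = n * math.log(n)                # O(n log n) estimate
        capability *= (1 + cpu_growth_per_year)     # compound CPU growth
        if capability >= complexity:
            return year
    return None

# Raw CPU growth (the Moore's Law paraphrase) versus "useful" CPU growth:
print(years_to_catch_up(0.585))   # ~58.5% per year
print(years_to_catch_up(0.53))    # ~53% per year
```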

Data may be difficult to analyse because of its lack of organisation. Organisation does not just mean the arrangement of the physical records of a data set,[11] but includes a coherent definition of what the various parts of the data actually mean,[12] of the quality and precision of the recorded information, and of the interrelations that should or could exist between different items.[13] If data is simply dumped into storage without consideration of, and adherence to, these matters, then it rapidly becomes useless: it becomes Data Landfill.
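
As a small illustration of this kind of organisation, the sketch below (with hypothetical field names and values, not drawn from any cited source) records each stored value together with its meaning, units, precision, provenance and related items, so that the information remains interpretable later.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataItem:
    """A stored value plus the metadata needed to keep it meaningful."""
    name: str                      # what the value represents
    value: float                   # the measurement itself
    units: str                     # e.g. "metres", "degrees C"
    precision: Optional[float]     # recording precision, if known
    source: str                    # where or how the value was obtained
    related_to: list = field(default_factory=list)  # names of related items

# Without the metadata, "23.7" on its own is well on its way to landfill.
reading = DataItem(name="borehole_temperature", value=23.7,
                   units="degrees C", precision=0.1,
                   source="sensor B-17, recorded 2013-05-02",
                   related_to=["borehole_depth", "borehole_id"])
print(reading)
```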

Recovery from Landfill

Outside of data processing we know that useful material can be extracted from waste and landfill, although at some expense, usually of human effort[14] and often at a cost to human health.[15][16][17] Similarly, with computing effort, real information can be extracted from data storage that was previously of little use. This extraction can come from being able to process data faster, or from reorganising large quantities of data in new ways (for example, as directed graphs[18][19][20] rather than as relational tables) and using search techniques other than the well-tried and well-known SQL methods.
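
As a rough illustration of the graph-shaped reorganisation mentioned above, the sketch below (plain Python, with invented node names, and not tied to any particular graph database) stores relationships as an adjacency list and answers a reachability question by traversal rather than by relational joins.

```python
from collections import deque

# Directed graph as an adjacency list: each key maps to the nodes it points to.
# The node names and edges here are purely illustrative.
edges = {
    "customer:42": ["order:1001", "order:1002"],
    "order:1001":  ["product:widget"],
    "order:1002":  ["product:gadget"],
    "product:widget": ["supplier:acme"],
    "product:gadget": ["supplier:acme"],
}

def reachable(start, graph):
    """Breadth-first traversal: every node reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

# "Which suppliers is customer 42 connected to, however indirectly?"
print(sorted(n for n in reachable("customer:42", edges) if n.startswith("supplier:")))
```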

References

  1. ^ http://www.rkndavis.com/business-technology/data-warehouse-or-data-landfill
  2. ^ http://www.ibmbigdatahub.com/blog/governance-avoid-data-landfill
  3. ^ http://www.zdnet.com/blog/service-oriented/size-of-the-data-universe-1-2-zettabytes-and-growing-fast/4750
  4. ^ https://en.wikipedia.org/wiki/Wikipedia:Size_of_Wikipedia
  5. ^ a b Kelly, Ian D. K (2013) "Little Data, Big Data, Very Big Data", Seismic Profile, Issue 3, Spring 2013, pp.34-35. See also http://www.seismicprofile.com/
  6. ^ http://web.mit.edu/esd.83/www/notebook/Complexity.PDF
  7. ^ http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Entropy_%28information_theory%29.html
  8. ^ http://www.santafe.edu/~jpc/JPCPapers.htm
  9. ^ http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.6869&rep=rep1&type=pdf
  10. ^ http://www.lsi.us.es/redmidas/CEDI/papers/407.pdf This paper itself includes relevant references, q.v.
  11. ^ http://itlaw.wikia.com/wiki/Data_organization
  12. ^ http://rcg.montana.edu/data-management-plannin/data-organization-and-metadata
  13. ^ http://www.methods.manchester.ac.uk/events/whatis/datalinkage/elliot.pdf
  14. ^ http://www.huffingtonpost.com/2013/11/08/chile-health-alert-trash_n_4241693.html
  15. ^ http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1637771/
  16. ^ http://www.aljazeera.com/indepth/inpictures/2013/10/life-cambodian-rubbish-dump-20131018112429578824.html
  17. ^ http://cdn.environment-agency.gov.uk/str-p271-e-e.pdf
  18. ^ http://www.neo4j.org/
  19. ^ http://readwrite.com/2011/04/20/5-graph-databases-to-consider
  20. ^ http://research.microsoft.com/en-us/projects/trinity/