Draft:Huge Data

From Wikipedia, the free encyclopedia
  • Comment: Independent reliable sources needed to establish notability. Grabup (talk) 02:41, 8 May 2024 (UTC)

Huge Data

Huge Data is a technology for fast access to very large combinatorial data sets. A working example is retrieving a puzzle pattern in under 1/100 of a second from a data set containing 32,009,658,644,406,818,986,777,955,348,250,624 distinct puzzle patterns. This set of more than 32 decillion (32 × 10^33) patterns represents more data than is stored on all hard disks on Earth, including every data center.

The technology uses a patent-pending puzzle (application 63/470,384), similar to the Rubik's Cube, together with a trade-secret algorithm and data-set technology that can retrieve any selected puzzle based on the minimum number of moves required to solve it; the number of puzzles at each move count is known and selectable across the spectrum of moves. Qubik (https://qubik.ai) uses this technology to train, benchmark, and evaluate the reliability of artificial intelligence (AI) with unbiased data suited to machine learning and benchmarking.
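The retrieval scheme described above is proprietary, but the general idea of selecting a pattern by its minimum move count can be sketched. The following is a minimal illustration, assuming only that the number of patterns at each move depth is known in advance (the counts below are made-up placeholders, not the real distribution); it maps a global pattern index to a (depth, rank-within-depth) pair without storing any patterns.

```python
import bisect

# Hypothetical counts of patterns solvable in exactly d moves.
# Placeholder values for illustration only; the real distribution
# and algorithm are not public.
counts_by_depth = [1, 18, 243, 3240, 43239]

# cumulative[d] = number of patterns needing fewer than d moves.
cumulative = [0]
for c in counts_by_depth:
    cumulative.append(cumulative[-1] + c)

def locate(index):
    """Map a global pattern index to (depth, rank within that depth)."""
    if not 0 <= index < cumulative[-1]:
        raise IndexError("pattern index out of range")
    # Binary search for the depth bucket containing this index.
    depth = bisect.bisect_right(cumulative, index) - 1
    return depth, index - cumulative[depth]
```

For example, `locate(0)` returns the unique solved pattern at depth 0, and `locate(19)` returns the first pattern requiring exactly 2 moves under the placeholder counts. Because each depth bucket's size is known, any pattern at a chosen difficulty can be addressed in logarithmic time, which is one way a spectrum of move counts could be made "selectable".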

References

https://qubik.ai