How to store datapoints on the order of 1+ trillion?


So, I have astronomical spectroscopy data in the following format:

{
    "molecule": "CO2",
    "blahblah": "...",
    ... 5 more simple fields ...
    "arrayofvalues": [ ... ]    <- lengths can go up to 2 million
}

Of this data, I have 600,000 files, so at up to 2 million values per file that comes to over 1 trillion individual datapoints that I want to search through and do computations with.

So can someone please direct me to a resource on big data or BigQuery showing how I can efficiently look up this data for computations and graphing? I want to, for example, search for certain molecules under certain conditions and see what data they show, etc.

I want to make a website where people can pick some variables and a value range, and get graphical or textual data back.

Now I tried to put some of this data into PostgreSQL, but when I do a GET request (even with just 5 files stored) it crashes Postman, because it's too much data.


Without knowing more details, you can take advantage of the data modeling options available in BigQuery, such as (a schema sketch follows the list):

  • nested data
  • arrays and structs
  • partitioned tables
  • clustering
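
For example, a minimal sketch of what such a schema could look like, assuming one row per file with the array nested inside it; the dataset, table, and field names (my_dataset.spectra, observed_at, metadata) are hypothetical, not from the question:

    -- Hypothetical schema: one row per file, values kept as a nested array.
    CREATE TABLE my_dataset.spectra (
      molecule       STRING,           -- e.g. "CO2"
      observed_at    DATE,             -- assumed date field, used for partitioning
      metadata       STRUCT<source STRING, instrument STRING>,  -- the "5 more simple fields"
      arrayofvalues  ARRAY<FLOAT64>    -- up to ~2 million values per row
    )
    PARTITION BY observed_at           -- prunes scans to the requested date range
    CLUSTER BY molecule;               -- co-locates rows for the same molecule

Modeled this way, the table has 600,000 rows rather than a trillion, and a lookup for one molecule over one date range only scans a small slice of the data.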

Take a look at the data types: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types

Also take a look at the partitioning and clustering techniques; a query sketch follows the link below.

https://towardsdatascience.com/how-to-use-partitions-and-clusters-in-bigquery-using-sql-ccf84c89dd65?gi=cd1bc7f704cc
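
A query against that hypothetical table can then filter on the partition and cluster columns first and UNNEST the array only for the surviving rows; the column names and ranges below are illustrative:

    -- Filter on the partitioned/clustered columns, then unnest the array.
    SELECT
      molecule,
      AVG(v) AS mean_value,
      MAX(v) AS max_value
    FROM my_dataset.spectra, UNNEST(arrayofvalues) AS v
    WHERE molecule = 'CO2'
      AND observed_at BETWEEN DATE '2020-01-01' AND DATE '2020-12-31'
      AND v BETWEEN 0.1 AND 0.9        -- the user-picked value range
    GROUP BY molecule;

Returning an aggregate like this, instead of the raw 2-million-value arrays, also avoids the oversized responses that crashed Postman in the PostgreSQL attempt.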

