
The Problem

Scalable AI in a global business context needs standardized, centralized data, and that is not available today. Curating, standardizing, and aggregating data is genuinely hard: trillions of new data points arrive every day, and the cost of ignoring them is too high because the world is changing even faster. Most companies are not set up to ingest and synthesize disparate data sources, so our work is aimed at helping people and organizations manage local and global uncertainty more proactively.

Some of the foundational challenges facing the data and analytics community are:

  • The variable volume of data
  • Different update frequencies
  • Inconsistent data fields
  • Standardization and aggregation of data (see the sketch after this list)
  • Serving and cataloging
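
As a concrete illustration of the standardization and aggregation challenge, the sketch below shows two hypothetical feeds that describe the same metric with inconsistent field names, different units, and different update frequencies being mapped onto one schema and combined. The feed names, column names, and conversion factors are assumptions for illustration, not taken from any specific pipeline.

```python
import pandas as pd

# Two hypothetical upstream feeds describing the same metric with
# inconsistent field names, units, and update frequencies.
feed_a = pd.DataFrame({
    "country": ["DE", "FR"],
    "ship_date": ["2024-01-02", "2024-01-02"],
    "volume_kg": [1200.0, 950.0],          # daily, kilograms
})
feed_b = pd.DataFrame({
    "iso_country": ["DE", "FR"],
    "week_start": ["2024-01-01", "2024-01-01"],
    "volume_tonnes": [8.4, 6.1],           # weekly, metric tonnes
})

# Map each feed onto one standard schema: country, date, volume_kg.
std_a = feed_a.rename(columns={"ship_date": "date"})
std_b = feed_b.rename(columns={"iso_country": "country", "week_start": "date"})
std_b["volume_kg"] = std_b.pop("volume_tonnes") * 1000.0  # tonnes -> kg

# Aggregate the standardized records into a single, queryable table.
combined = pd.concat([std_a, std_b], ignore_index=True)
summary = combined.groupby("country", as_index=False)["volume_kg"].sum()
print(summary)
```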

Data mesh, at its core, is founded on decentralization: responsibility is distributed to the people closest to the data, which supports continuous change and scalability. The approach scales well and makes the generation and movement of data across the organization much smoother. Data products hold the data, and together with the application domains that consume that data, they are interconnected to form the data mesh.
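
A minimal sketch of that idea, assuming each data product is owned by a domain team and exposes its data to downstream consumers; the class and attribute names below are hypothetical and do not come from any particular data mesh framework.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A domain-owned node in the mesh: it holds data and exposes it to consumers."""
    name: str
    owner_domain: str                                 # the team closest to the data owns it
    records: list = field(default_factory=list)
    consumers: list = field(default_factory=list)     # downstream products / applications

    def publish(self, record: dict) -> None:
        """The owning domain adds new data without a central team in the loop."""
        self.records.append(record)

    def connect(self, consumer: "DataProduct") -> None:
        """Producer-to-consumer edges are what form the mesh."""
        self.consumers.append(consumer)

# Two domain-owned products; the analytics domain consumes logistics data.
shipments = DataProduct("shipments", owner_domain="logistics")
risk_scores = DataProduct("country-risk", owner_domain="analytics")
shipments.connect(risk_scores)

shipments.publish({"country": "DE", "volume_kg": 1200.0})
print([c.name for c in shipments.consumers])          # -> ['country-risk']
```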

A data product consists of different layers for the collection, processing, and serving of data. A catalog is maintained so consumers can understand the data to be fetched through the API and explore the knowledge graph that links that information to other data products.
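
To make the layering and the catalog concrete, here is a hypothetical sketch: the collection, processing, and serving layers are plain functions, and the catalog entry records the serving API plus knowledge-graph links to related data products. All names, endpoints, and link types are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """What a consumer reads before calling the product's serving API."""
    product: str
    api_endpoint: str
    schema: dict
    # Knowledge-graph edges to related data products (relation -> product name).
    graph_links: dict = field(default_factory=dict)

def collect(raw_rows: list) -> list:
    """Collection layer: ingest raw records as-is."""
    return list(raw_rows)

def process(rows: list) -> list:
    """Processing layer: standardize field names and drop incomplete rows."""
    return [
        {"country": r.get("country") or r.get("iso_country"), "volume_kg": r["volume_kg"]}
        for r in rows
        if "volume_kg" in r
    ]

def serve(rows: list, country: str) -> list:
    """Serving layer: the filtered view the API returns to consumers."""
    return [r for r in rows if r["country"] == country]

entry = CatalogEntry(
    product="shipments",
    api_endpoint="/data-products/shipments/v1",
    schema={"country": "str", "volume_kg": "float"},
    graph_links={"feeds": "country-risk", "derived_from": "raw-shipments"},
)

rows = process(collect([{"iso_country": "DE", "volume_kg": 1200.0}]))
print(serve(rows, "DE"), entry.graph_links)
```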