Kamu Update: We join CDL and CODWG


Kamu is selected for the CDL program

We are very happy to announce that Kamu has been accepted into Creative Destruction Lab - a program for early-stage science and technology companies. We will be joining the new Compute Stream in Vancouver, which focuses on “technologies that will impact humankind in the same ways the printing press, the personal computer, and the internet did”. This is quite an expectation to live up to, and it definitely reflects our mission - to bring humanity to the next level of data-driven decision making and bootstrap a new era of digital economy based on fair and rapid data exchange.

We join the Protocol Labs “Compute Over Data” Working Group

Following our graduation from the Faber-Filecoin Web3 accelerator, Protocol Labs - the company behind such amazing projects as IPFS and Filecoin - invited us to join their newly formed Compute Over Data Working Group. We are proud to work alongside many amazing companies to collectively address the problem of decentralized data processing.

Companies in the group tackle a wide variety of problems, such as:

  • Sandboxed computation environments that can run co-located with data (in WASM VMs or containers)
  • Verifiable computations and identifying malicious actors
  • Decentralized ownership and authorization
  • Privacy-preserving computations
  • Web3-native databases
  • General-purpose compute networks (think decentralized alternatives to AWS)

With so much activity in this space, we see a clear niche where Kamu can bring the most value:

  • Structured data processing - WASM and general-purpose compute are great, but data processing requires much more robust and higher-level primitives. We need interoperable data and schema formats, and data-centric processing languages like SQL that go beyond a single dataset (e.g. in the case of JOINs)
  • Dynamic data - in addition to one-shot processing tasks, how do we represent dynamic data sources (e.g. IoT devices, medical records) in decentralized, content-addressable storage, and how do we build pipelines that can continuously process them?
  • Bridging Web3 with the existing ecosystems of enterprise, government, and research data - for Web3 data to become mainstream, we need to provide a smooth transition path for existing organizations through the use of standard analytical data formats, languages, and APIs, and by allowing them to pick a comfortable level of decentralization.
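To make the “dynamic data in content-addressable storage” idea above concrete: one common approach is an append-only chain of content-addressed blocks, where each block links to its predecessor, so a single head address pins the entire history of a dataset. The sketch below is a minimal illustration of that general technique in Python - it is not Kamu’s actual metadata format, and the `Chain` and `address` names are hypothetical.

```python
import hashlib
import json

def address(content: bytes) -> str:
    """Derive a content address from the bytes themselves (sha256 here)."""
    return hashlib.sha256(content).hexdigest()

class Chain:
    """An append-only chain of content-addressed blocks.

    Each block embeds the address of its predecessor, so the address of
    the head block uniquely identifies the dataset's entire history.
    """
    def __init__(self):
        self.blocks = {}   # address -> serialized block
        self.head = None   # address of the latest block

    def append(self, records) -> str:
        # Serialize deterministically so equal content yields equal addresses
        block = json.dumps(
            {"prev": self.head, "records": records}, sort_keys=True
        ).encode()
        addr = address(block)
        self.blocks[addr] = block
        self.head = addr
        return addr

chain = Chain()
chain.append([{"sensor": "t1", "temp": 21.5}])
head = chain.append([{"sensor": "t1", "temp": 21.7}])
# The head address changes with every append, but earlier addresses stay
# valid forever, so a pipeline can pick up and process new blocks
# incrementally instead of re-reading the whole dataset.
```

Because every block is immutable and addressed by its own hash, a continuously updated source can live in a storage like IPFS while downstream pipelines follow the moving head.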

Here you can find our introduction and the technology demo we presented to the group.

Technology updates

Our progress on technology in the past two months includes:

  • Support for publishing and syncing data from IPFS - our first integration with a content-addressable file system went very smoothly, as we originally designed our protocol around this
  • A new chapter in our self-serve demo showcases using Kamu for Web3 data analytics - follow it to build a complex pipeline that combines data from the Ethereum blockchain and Web2 data sources
  • Updates to the core protocol’s block structure that allowed us to significantly improve sync efficiency
  • We’ve put a major effort into our web frontend’s internals - it is mostly not user-facing, but it sets us up for rapid feature development to simplify the user experience.
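One way a chain of content-addressed blocks helps sync efficiency (the general idea behind the block-structure update above, not Kamu’s exact protocol): a node only needs to walk back from the remote head until it hits a block it already has, then fetch just that gap. A minimal sketch, with a hypothetical `missing_blocks` helper and blocks represented as plain dicts with a `prev` link:

```python
def missing_blocks(head, remote_blocks, local_blocks):
    """Walk the chain back from the remote head, collecting addresses
    the local node does not have yet. Stops at the first known block,
    so an up-to-date follower transfers almost nothing."""
    to_fetch = []
    addr = head
    while addr is not None and addr not in local_blocks:
        to_fetch.append(addr)
        addr = remote_blocks[addr]["prev"]
    # Oldest first, so blocks can be applied in order
    return list(reversed(to_fetch))

# Remote has three blocks a <- b <- c; local only knows a
remote = {"a": {"prev": None}, "b": {"prev": "a"}, "c": {"prev": "b"}}
local = {"a": {"prev": None}}
missing_blocks("c", remote, local)  # → ["b", "c"]
```

Because addresses are derived from content, the local node can also verify each fetched block against its address, so sync stays both minimal and tamper-evident.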