Ashutosh Synghal's Decentralized Vision for Secure Data in AI

Artificial intelligence has an insatiable appetite for data – the more diverse and abundant the data, the smarter the algorithms. Yet this hunger clashes with growing concerns over privacy. Training cutting-edge AI models often requires aggregating troves of personal information on centralized servers, a practice that can jeopardize user privacy and violate data protection laws. By 2025, global data creation is projected to exceed 180 zettabytes, and organizations are looking for ways to utilize this data goldmine without trampling individual rights. This is the crux of the privacy paradox: AI needs data to thrive, but people and regulators are increasingly unwilling to surrender their privacy.

Enter Ashutosh Synghal, Vice President of Engineering at Midcentury Labs, who is at the forefront of a movement to solve this dilemma. Synghal is pioneering a blockchain-powered solution to enable AI development in a privacy-preserving, decentralized way. Decentralized “confidential AI” – blending blockchain with secure hardware – is emerging as a viable path to let users own and monetize their data while still contributing to AI breakthroughs. In essence, Synghal’s work allows data to be used for machine learning without pooling it in one vulnerable location, redefining how AI training and security can co-exist.
A Blockchain-Powered Data Platform for AI
Midcentury Labs’ flagship project is a decentralized data platform that connects data providers (everyday people or organizations with valuable information) and data consumers (AI developers and companies) on equal footing. Unlike a traditional data broker, the platform is built on a blockchain network, using smart contracts to automate and enforce data-sharing agreements. When a developer needs a dataset to train an AI model, they can request it through the platform. Users who opt in can then permission their data for that task, and the blockchain ledger records the transaction transparently. This approach shifts data ownership and control back to the individuals – a stark contrast to the status quo of tech giants hoarding user data. In fact, the platform’s mission is to “shift value and control from centralized entities to a network where individuals are rewarded for fueling AI’s growth”.
Crucially, Midcentury ensures that data exchanges are not only transparent but also trustless – meaning participants don’t have to blindly trust a central authority. Smart contracts automatically enforce permissions and payment terms. For example, if a healthcare AI company wants to train a model on patient data, a smart contract might stipulate that a certain number of anonymized health records are needed at a set price, and only for a defined use. Once the terms are agreed upon, the contract executes: the model is trained within a secure environment (more on that next), and the patients’ data wallets are paid—perhaps in cryptocurrency or digital tokens—for their contribution. All of this happens without a middleman, and with an immutable audit trail showing who accessed what data and when.
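The contract flow described above can be sketched in miniature. The Python below is purely illustrative (all names and the schema are hypothetical, not Midcentury's actual contract code): it models an agreement that rejects any use outside the agreed purpose, appends every access to an audit trail, and computes payouts to provider wallets only when enough records are supplied.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataAgreement:
    """Terms a data consumer and data providers agree to up front."""
    records_needed: int
    price_per_record: float      # paid in tokens
    allowed_use: str             # e.g. "train-diagnostic-model-v1"
    audit_log: list = field(default_factory=list)

    def execute(self, provider_wallets: list, requested_use: str) -> dict:
        """Enforce the terms, log every access, and compute payouts.

        Mirrors what an on-chain contract would do automatically: refuse
        any use outside the agreed purpose, and pay providers only when
        enough opted-in records are available.
        """
        if requested_use != self.allowed_use:
            raise PermissionError("use not covered by the agreement")
        if len(provider_wallets) < self.records_needed:
            raise ValueError("not enough opted-in records to start training")

        payouts = {}
        for wallet in provider_wallets[: self.records_needed]:
            # Immutable-style audit entry: who, for what, and when.
            self.audit_log.append(
                (wallet, requested_use, datetime.now(timezone.utc).isoformat())
            )
            payouts[wallet] = payouts.get(wallet, 0.0) + self.price_per_record
        return payouts

# Example: 3 anonymized records at 0.5 tokens each, for one defined use.
agreement = DataAgreement(records_needed=3, price_per_record=0.5,
                          allowed_use="train-diagnostic-model-v1")
payouts = agreement.execute(["w1", "w2", "w3"], "train-diagnostic-model-v1")
```

In a real deployment these checks would run on-chain, so no single party could skip the payment step or scrub the audit trail.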
zkTLS and TEEs: Under the Hood of Privacy-Preserving AI
Making this kind of privacy-first data exchange possible requires heavy-duty cryptographic engineering. Synghal’s team employs a combination of techniques to ensure that even as data is being used by AI algorithms, it remains shielded from prying eyes. One cornerstone is zkTLS, short for Zero-Knowledge Transport Layer Security. This protocol merges standard internet encryption (TLS) with zero-knowledge proofs, producing cryptographic evidence that a data transaction or computation is valid – all without exposing the underlying data. In practice, zkTLS can prove to a third party (or to a smart contract) that an AI model was trained on certain inputs or that a dataset meets specific criteria, without ever revealing the inputs themselves. It’s like confirming a secret is true without disclosing the secret.
Another key technology is the use of Trusted Execution Environments (TEEs) – secure enclaves in modern processors that isolate code and data from the rest of the system. Midcentury leverages TEEs to create a safe sandbox where AI model training occurs on sensitive data. When a dataset is provided through the platform, it is loaded into a TEE on a distributed network of nodes. Inside that enclave, the AI training code runs on the data, but even the node’s operator or any external observer cannot see the raw data or intermediate results. Only the final model parameters or agreed-upon insights leave the enclave – and even those can be verified via zero-knowledge proofs to ensure nothing private leaked. This approach, often called confidential computing, lets algorithms glean knowledge from data without exposing it – effectively solving the problem of how to learn from information you’re not allowed to look at directly.
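The enclave boundary can be modeled with a short sketch. This is only a software stand-in for illustration (real TEEs such as Intel SGX or AMD SEV enforce the isolation in hardware, with remote attestation): raw records are sealed inside the object, and the only thing that crosses the boundary is the trained output, here a trivial "model" consisting of aggregate statistics.

```python
class SimulatedEnclave:
    """Toy stand-in for a Trusted Execution Environment: records are
    sealed inside, and only aggregate model parameters cross the
    enclave boundary. Illustrative only; a real TEE enforces this
    isolation in hardware and can attest to the code it runs."""

    def __init__(self, sealed_records: list):
        # Name-mangled attribute: the raw records are not part of
        # any interface an outside caller is meant to touch.
        self.__records = list(sealed_records)

    def train(self) -> dict:
        # Runs "inside the enclave": a trivial model, the mean of
        # one feature. Only the aggregates are returned.
        n = len(self.__records)
        mean = sum(r["x"] for r in self.__records) / n
        return {"model_mean": mean, "n_records": n}

# The caller supplies data, then receives only the trained parameters.
params = SimulatedEnclave([{"x": 1.0}, {"x": 3.0}]).train()
# -> {"model_mean": 2.0, "n_records": 2}
```

In the real system the node operator would see neither the records nor the intermediate state, and the returned parameters could additionally carry a zero-knowledge proof that nothing private leaked.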
Synghal’s engineering blueprint doesn’t stop there. The platform also integrates Secure Multi-Party Computation (SMPC) protocols and other advanced cryptographic tools to further bolster privacy. In some cases, data can be split among multiple parties and jointly computed on, so that no single party ever holds all the raw inputs. By combining zero-knowledge proofs, SMPC, and TEEs, Midcentury is building a multi-layered defense for user data during AI processing. This holistic approach ensures end-to-end security: data is protected at rest (encrypted and stored off-chain), in transit (via zkTLS), and in use (inside TEEs).
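The SMPC idea of "split among multiple parties and jointly computed on" has a classic minimal instance: additive secret sharing. In the sketch below (a generic textbook construction, not Midcentury's protocol), each private value is split into random shares that sum to it modulo a large prime; each party only ever holds meaningless-looking shares, yet combining the parties' partial sums reveals exactly the total and nothing else.

```python
import secrets

P = 2**61 - 1  # large prime modulus for additive secret sharing

def share(value: int, n_parties: int) -> list:
    """Split a private value into n random shares that sum to it mod P.
    Any n-1 shares together reveal nothing about the value."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def secure_sum(private_values: list, n_parties: int = 3) -> int:
    """Each data owner distributes one share per party; each party adds
    up the shares it received; combining the per-party sums yields only
    the total, never any individual input."""
    per_party = [0] * n_parties
    for v in private_values:
        for i, s in enumerate(share(v, n_parties)):
            per_party[i] = (per_party[i] + s) % P
    return sum(per_party) % P

# Three owners' private values; the parties learn only the sum, 50.
total = secure_sum([12, 30, 8])
```

This primitive generalizes: sums are enough to compute averages and gradient aggregates, which is why secret sharing composes naturally with federated model training.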
Redefining AI Training and Security
The implications of Synghal’s decentralized, privacy-first framework are far-reaching. For AI developers, it opens doors to vast new data sources that were previously off-limits due to privacy concerns or regulations. Industries like healthcare and finance, which deal with highly sensitive information, stand to benefit immensely. Under traditional setups, a hospital or bank might refuse to share data with an AI startup due to liability and compliance issues. But through a privacy-preserving platform, they could contribute anonymized, secure data to train algorithms for disease detection or fraud prevention, with cryptographic guarantees that no patient or customer privacy will be violated. Indeed, the AI models built on Midcentury’s platform are especially suited for domains that demand both innovation and strict privacy controls.
For end users and data owners, this model offers an unprecedented level of control and potential reward. Instead of being passive subjects whose data is siphoned off behind the scenes, individuals become active stakeholders in the AI ecosystem. A person could, for example, allow their wearable fitness data or social media activity to be used by an AI research project and in return earn digital tokens or fees for that contribution. They also gain assurance that their data can’t be misused or deanonymized thanks to the platform’s technical safeguards. This flips the current paradigm on its head: rather than privacy being an afterthought, it’s baked into the AI development process from the start.
Security is another major win. Decentralizing the data and computations makes it harder for hackers to find a single “jackpot” target. In a centralized setup, one breach can leak millions of records. But with Midcentury’s distributed approach, there is no central honeypot – data remains fragmented and encrypted, and any attempt to tamper with the training process would be evident on the blockchain ledger or thwarted by the TEE’s protections. In essence, Synghal’s architecture minimizes the attack surface for bad actors and builds resilience against data leaks by design.

Synghal’s Role and the Road Ahead
Ashutosh Synghal isn’t just the chief architect behind these innovations; he’s also a leading voice advocating for privacy-first AI in the broader tech community. With a background that spans Stanford’s computer science program and engineering roles at Amazon’s AI-driven retail systems, Synghal brings both academic and practical expertise to the table. He has written about the importance of data marketplaces in the future of AI, emphasizing how technologies like blockchain, zero-knowledge proofs, and secure multi-party computation can enable ethical data sharing at scale. Backed by prominent investors (including Andreessen Horowitz and crypto-focused fund Delphi Ventures) who have poured significant seed funding into Midcentury Labs, Synghal is leading by example in an emerging field that sits at the intersection of AI, blockchain, and data ethics.
The industry is taking notice. A growing number of projects and protocols are exploring similar routes – from major tech firms implementing federated learning and differential privacy, to Web3 startups like iExec that combine blockchain with confidential computing to let users monetize data with privacy protections built-in. Synghal’s Midcentury Labs is among those at the vanguard, demonstrating a working model of how decentralization can resolve the AI data dilemma. As global trends push for data sovereignty – with regulations like GDPR and consumer sentiment demanding greater privacy – solutions that enable privacy-first AI are poised to become the new standard rather than the exception.

“With decentralized AI, we are building a future where AI can thrive without compromising individual rights… This is just the beginning of a global shift toward ethical, privacy-first technology,” Synghal says, expressing confidence that decentralized privacy solutions will only gain momentum in the coming years. If Midcentury’s blockchain-based data platform is any indication, the gap between guarding privacy and advancing AI might finally be closing – one secured dataset at a time.