In today’s digital era, artificial intelligence (“AI”) has emerged as a revolutionary technology that is rapidly transforming industries. Among the most significant advances within AI are generative AI and large language models (“LLMs”). These sophisticated systems, powered by machine learning (“ML”) algorithms, have demonstrated unparalleled capabilities in generating human-like text, understanding complex language structures, and aiding decision-making processes.
Since 2019, CalypsoAI has been an early innovator in security for AI models and applications, underscoring the Company’s domain expertise and thought leadership in the rapidly scaling AI security market. CalypsoAI’s existing product, VESPR Validate, was launched in 2020 and is used to independently test and validate how AI/ML models perform under various conditions. Led by CEO Neil Serebryany, the CalypsoAI team has been actively developing and deploying its second product, Moderator: the first and most robust AI risk management product available today, empowering enterprises to leverage the immense potential of LLMs responsibly.
The Market and Opportunity
Globally, enterprises recognize the transformative potential of generative AI and LLMs, understanding that these revolutionary technologies can unlock massive efficiency and productivity gains for their businesses and workforces. A recent report by McKinsey estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion to the global economy. From improving customer experiences and enhancing operational efficiency to enabling data-driven insights and streamlining workflows, these cutting-edge technologies are reshaping how businesses operate, compete, and innovate in the market.
However, as the rapid pace of LLM adoption has taken many enterprises by storm, companies have raised security and data quality concerns about implementing LLMs in their systems. Specifically, many enterprises fear sensitive data leaking out through LLMs and want assurance that generative AI outputs are accurate and reliable before they are used. It is becoming mission-critical for organizations to add guardrails that prevent data loss and enable data traceability across an enterprise’s LLMs.
Several years ago, our team observed these trends and formulated an investment thesis centered on the idea that for organizations to fully harness the potential of AI and reap its productivity benefits, that adoption must be underpinned by security, trustworthiness, and auditability. We likened the need for AI security to the importance of safeguarding traditional web applications. To operationalize AI within an enterprise, we envisioned the emergence of a new value chain of innovative solutions and tools designed to test, validate, and monitor diverse AI models, all ensuring their safe integration and operation within organizations.
Fast forward to the present, and companies are banning the use of natural language processing (“NLP”) chatbots, like ChatGPT, due to concerns about trust and control within their organizations. This scenario is a compelling example of why developing novel security solutions is imperative. We believe that by enabling the safe and responsible application of AI, these models can propel significant innovation and value creation within enterprises.
These growing security and safety concerns, together with a similar thesis, sparked the creation of CalypsoAI’s Moderator solution. Recognizing that organizations need to use LLMs safely and confidently to avoid missing out on their productivity gains, CalypsoAI’s Moderator helps commercial enterprises manage LLM risks through a monitoring engine, implemented as a gateway, which can measure, flag, and block inputs and outputs for LLMs of all types.
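To make the gateway pattern concrete, here is a minimal sketch of how such a moderation layer could sit between users and an LLM, checking traffic in both directions. This is purely illustrative: the rule names, patterns, and `moderate` function are our own hypothetical constructions, not CalypsoAI’s actual implementation or API.

```python
import re

# Illustrative sensitive-data rules; a real deployment would use far
# richer detectors (PII classifiers, policy engines, audit logging).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),     # token-like secret
}

def scan(text: str) -> list:
    """Return the names of the rules the text triggers."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def moderate(prompt: str, call_llm) -> dict:
    """Gateway wrapper: check the prompt before it reaches the model,
    and the model's output before it reaches the user."""
    input_flags = scan(prompt)
    if input_flags:
        return {"status": "blocked", "stage": "input", "flags": input_flags}
    response = call_llm(prompt)
    output_flags = scan(response)
    if output_flags:
        return {"status": "blocked", "stage": "output", "flags": output_flags}
    return {"status": "allowed", "response": response}
```

For example, `moderate("My SSN is 123-45-6789", some_llm)` would be blocked at the input stage before any data leaves the enterprise, while a clean prompt passes through and its response is scanned on the way back.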
Three years ago, Paladin first invested in CalypsoAI with the thesis that AI adoption would proliferate through enterprises and government agencies, making AI risk management and security tools essential. Recently, CalypsoAI announced that it has raised $23 million in a Series A-1 financing, which Paladin was excited to lead. In the round, we partnered with a strong syndicate of strategic investors, including Lockheed Martin Ventures. Looking forward, as generative AI and LLMs continue to evolve and mature, their impact on enterprises and the need for security and risk management solutions will only intensify. As such, we believe the CalypsoAI team is in a strong position to continue building the next generation of products to secure and manage LLMs in organizations. We are excited to continue supporting CalypsoAI as it brings its novel security solutions to this fast-expanding sector, where enterprise adoption shows no signs of slowing down.