
By Rachel Roumeliotis, Director, Open Source Strategy at Intel

OPEA 1.2 brings some major updates, welcome enhancements, and cloud accessibility to the project. The focus of the OPEA engineers has been on reducing redundancy, improving code quality, and continuing to welcome new contributors. So let’s get to it! 

We now have integration with LlamaIndex and LangChain, enabling OPEA to serve as a backend and extending its reach further into the current GenAI ecosystem. OPEA is also now available on the AWS Marketplace as part of our goal to reach developers where they are actually doing their work. Key contributions in this release include OpenSearch integration contributed by AWS; more OPEA GenAI Examples supported on AMD® and Intel® Gaudi® 3 hardware, contributed by those two companies respectively; and two key contributions from Infosys: Azure automated deployment for OPEA applications and Elasticsearch vector database integration.
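
To make the LangChain integration concrete, here is a minimal sketch of calling an OPEA-served model from LangChain. It assumes an OPEA LLM microservice is already running and exposes an OpenAI-compatible API; the endpoint URL, API key, and model ID below are hypothetical placeholders, not values from the release.

from langchain_openai import ChatOpenAI

# Minimal sketch: calling an OPEA-served model through LangChain.
# Assumes an OPEA LLM microservice exposing an OpenAI-compatible API;
# the endpoint URL, API key, and model ID are hypothetical placeholders.
llm = ChatOpenAI(
    base_url="http://localhost:9000/v1",          # hypothetical OPEA endpoint
    api_key="unused",                             # local services may not require auth
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model ID
)

print(llm.invoke("Summarize what OPEA provides in one sentence.").content)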

OPEA 1.2 Release Details

OPEA 1.2 brings six newly enhanced end-to-end GenAI examples.

We also have some fantastic new GenAI components in store for you in the OPEA 1.2 release.

Additionally, Infosys contributed Azure automated deployment for OPEA applications, which is instrumental to our goal of ease of deployment across different cloud service providers. And finally, we have new model compatibility with bge-base-zh-v1.5, Falcon2-40B/11B, and Falcon3. Keeping up with the fast pace in this space is a challenge!
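
As a rough illustration of how the Elasticsearch vector database integration and bge-base-zh-v1.5 support could fit together in a retrieval pipeline, here is a hedged sketch using LangChain's Elasticsearch and Hugging Face wrappers. The Elasticsearch URL, index name, and sample texts are placeholders, and the exact wiring inside OPEA's own retriever component may differ; see the release notes and component READMEs for the real configuration.

from langchain_huggingface import HuggingFaceEmbeddings
from langchain_elasticsearch import ElasticsearchStore

# Placeholder sketch: embed documents with bge-base-zh-v1.5 and index
# them in Elasticsearch as a vector store. The URL and index name are
# hypothetical; OPEA's retriever microservice wires this up differently.
embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-base-zh-v1.5")

store = ElasticsearchStore(
    es_url="http://localhost:9200",  # hypothetical local Elasticsearch
    index_name="opea-demo",          # placeholder index name
    embedding=embeddings,
)

store.add_texts(["OPEA composes GenAI microservices into end-to-end examples."])
hits = store.similarity_search("What does OPEA do?", k=1)
print(hits[0].page_content)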

Take a look at the official release notes for all the details.

OPEA 1.3 to come in Spring 2025!

Want to stay up to date on OPEA? Join our mailing list by visiting OPEA.dev

LF AI & Data Resources

Access other resources on LF AI & Data’s GitHub or Wiki