Be first. Be fast.

Whatever you build, do it right and do it well with Leonata.

Introducing Leonata: The Next Evolution in AI - Lightweight, Secure, and Lightning-Fast

Today, the AI landscape welcomes a game-changing innovation: Leonata. Unlike traditional large language models (LLMs), Leonata is not just another text generator; she's a paradigm shift. With her unparalleled capabilities, Leonata redefines what's possible in natural language processing (NLP).

"At Leximancer Technologies, we've shattered the status quo with our latest creation: Leonata. Unlike LLMs, Leonata doesn't hallucinate or require extensive computational resources. She's compact, secure, and lightning-fast," explains Dr Andrew Smith, Chief Scientist at Leximancer Technologies.

Features That Set Leonata Apart:

  • Small Compute Needs: Leonata’s efficiency is unmatched. Despite her incredible power, she demands minimal computational resources, making her ideal for a wide range of applications.

  • Air-gapped Compatibility: Concerned about security? With Leonata, you can breathe easy. Her unique design allows for air-gapped operation, ensuring data integrity and confidentiality.

  • Speed: Need answers in seconds, not minutes? Leonata delivers. Her lightning-fast processing capabilities set a new standard for AI performance.

"Cassandra isn't just a product; she's a game-changer. Whether you're in finance, healthcare, or any other industry, Cassandra empowers you to unlock new possibilities in data analysis and decision-making," adds Dr Smith.

With Leonata, organizations can harness the power of AI without compromising on security, speed, or efficiency. Experience the future of NLP with Leonata.

What is it for?

App architecture

Placeholder

Document interrogation

Placeholder

Knowledge Graphs

Placeholder

Launching an AI disruptor

  • Key Advantages of Leonata:

    1. Eliminates three critical steps: document splitting (chunking), embedding, and vector database processing

    2. Reduces time, cost, and computational resources for generating answers

    3. Enhances factual accuracy through cross-referencing multiple sources

    4. Provides superior contextual understanding of user queries

  • How it works:

    1. User uploads a document

    2. Leonata Command Line creates a knowledge graph of the document content

    3. User asks a question

    4. Leonata Command Line retrieves the relevant sections of the document

    5. Large Language Model (LLM) generates an answer using the question and those sections

  • Leonata offers significant benefits over traditional RAG systems (sketched in code after this list):

    - Reduced time to upload and answer

    - Lower costs (particularly embedding API and vector DB running costs)

    - Decreased compute requirements; operates offline
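For readers who prefer code, here is a minimal runnable sketch contrasting the two retrieval flows. It is an illustration only: the bag-of-words "embedding" and the co-occurrence "graph" are toy stand-ins, and none of these function names are Leonata's actual API.

```python
# Toy sketch: traditional RAG retrieval vs. graph-style retrieval.
# All names here are illustrative stand-ins, not Leonata's real interface.
from collections import defaultdict

def split_into_chunks(text, size=50):
    # The chunking step that the graph flow skips entirely.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def toy_embed(text):
    # Stand-in for an (often paid) embedding API call: a bag of words.
    return set(text.lower().split())

def rag_retrieve(document, question):
    # Chunk -> embed -> store -> similarity search, on every upload.
    store = [(toy_embed(c), c) for c in split_into_chunks(document)]
    q = toy_embed(question)
    return max(store, key=lambda pair: len(pair[0] & q))[1]

def graph_retrieve(document, question):
    # Build a term-to-sentence "knowledge graph" once, then answer
    # questions by graph lookup: no embeddings, no vector database.
    graph = defaultdict(list)
    for sentence in document.split("."):
        for term in set(sentence.lower().split()):
            graph[term].append(sentence.strip())
    hits = [s for term in question.lower().split() for s in graph.get(term, [])]
    return max(set(hits), key=hits.count) if hits else ""

doc = "Leonata builds a knowledge graph of a document. The graph links terms to sentences."
print(rag_retrieve(doc, "knowledge graph"))    # same answer,
print(graph_retrieve(doc, "knowledge graph"))  # fewer moving parts
```

The point of the comparison is structural: the first flow needs three extra pieces of machinery (chunking, embeddings, a vector store) before a single question can be answered; the second needs only the graph.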

Tech Head Says

Dr Andrew Smith, Chief Scientist and Founder at Leximancer, emphasizes Leonata’s ability to provide necessary context by analyzing queries in relation to user profiles, previous interactions, and current context. This enables LLMs to generate specific, nuanced text tailored to user needs, with zero hallucination.

Testimonial

“With the reduced token load, efficiency, and security, this product is a game changer for our business, building applications for the Australian Defence Force.”

-Daryl Batchelor, Senior Software Engineer

FAQs

How does it actually save me time?

The problem is that LLMs rely on having extremely large volumes of source material in order to generate meaningful text. For many use cases the model's training data is sufficient, and is represented by default in the model weights used for inference.

For many more use cases, the specific context for the task at hand is not present in the training data, so the model can't use it. For example, if you asked an LLM for feedback on a book you were writing, the LLM would have no context for what you had written.

For one subset of problems there's an easy solution: just provide the context along with your request. Re-wording an email is an easy example; it's trivial to paste the existing email into the prompt. However, another subset of problems comes with two potentially overlapping issues, and this is ultimately the source of the problems software engineers face on a daily basis.

Those issues are: 1) the source/context an LLM needs is buried in some kind of data store, e.g. spread across 100 different files on your computer, and/or 2) the source is too large for the LLM to handle and needs to be pruned and curated before it can be provided.

What does this mean? Software engineers can't use the, frankly absurd, productivity-enhancing features of an LLM on tasks with complicated or lengthy context without significant effort spent finding and selecting the correct subset of context to provide for each specific task.

The effort, cognitive load, and time taken to do that grow rapidly with the size of the task. At some point the burden is such that not using an LLM at all becomes the best option, because it's simply too hard to curate the context required to get results that are actually helpful.

This is extremely frustrating, because when a task's scale allows for easy context management, LLM assistance can deliver something like a 10x boost in productivity. Yet for very complex tasks, using an LLM can actually reduce productivity, because it's just too hard to curate the context.

The examples above focus on writing and research-like tasks, mostly because they're easy to picture, but the problem applies equally to more technical automation workflows that involve LLMs. Where an LLM is used as a decision-making component, providing the correct context is crucial, and in an automated environment context selection generally becomes less reliable as the number of options grows.
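To make that burden concrete, here is a toy sketch of the manual curation loop just described. The folder name, keywords, and token budget are hypothetical examples; the hand-picked KEYWORDS line is exactly the per-task effort that doesn't scale.

```python
# Toy illustration of manual context curation. The folder name, keywords,
# and token budget below are hypothetical examples, not real values.
from pathlib import Path

CONTEXT_BUDGET = 8_000             # assumed context window, in tokens
KEYWORDS = {"invoice", "refund"}   # hand-picked per task: the manual effort

def rough_tokens(text):
    # Crude heuristic (~0.75 words per token); good enough for a sketch.
    return int(len(text.split()) / 0.75)

def curate(folder):
    picked = []
    for path in sorted(Path(folder).glob("*.txt")):  # e.g. 100 files to sift
        text = path.read_text(errors="ignore")
        if KEYWORDS & set(text.lower().split()):     # crude relevance test
            picked.append(text)
    context = "\n\n".join(picked)
    if rough_tokens(context) > CONTEXT_BUDGET:
        raise ValueError("Context still too large: prune further by hand.")
    return context

# prompt = curate("notes/") + "\n\nQuestion: why was this order refunded?"
```

Every new task means new keywords, new pruning, and a new fight with the context budget; that is the overhead described above, and it has to be paid again for every task.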

What tasks would I use it for?

  • Build a Knowledge Graph or semantic model of your private documents

  • Simple research and text preparation tasks

  • More technical tasks, such as:

    - Deployment as an agent

    - A component in an architecture pipeline

    - A decision-making component

    - An LLM super prompter / context enhancer (sketched in code below)
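As a sketch of that last item, here is the "context enhancer" pattern in code. Leonata's real interface is not documented on this page, so leonata_concepts() below is a hypothetical stub; only the surrounding pattern is the point.

```python
# Hypothetical sketch of the "super prompter / context enhancer" pattern.
# leonata_concepts() is a stub standing in for a real knowledge-graph query.

def leonata_concepts(question, corpus):
    # Stub: pretend the knowledge graph returns the passages most
    # connected to the question's terms.
    terms = set(question.lower().split())
    return [doc for doc in corpus if terms & set(doc.lower().split())]

def enhanced_prompt(question, corpus):
    # Prepend graph-selected context so the downstream LLM answers from
    # your documents rather than from its training data alone.
    context = "\n".join(f"- {c}" for c in leonata_concepts(question, corpus))
    return f"Context:\n{context}\n\nQuestion: {question}"

corpus = [
    "The audit flagged three supplier contracts.",
    "Supplier contracts renew every July.",
]
print(enhanced_prompt("Which supplier contracts were flagged?", corpus))
```

The same shape works for the agent and pipeline uses listed above: the graph sits in front of the LLM and decides what the LLM gets to see.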

How can I make curating the context easier?

Preparing files for an LLM is a lengthy, time-consuming task.

Leonata makes curating the context easier, which widens the range of tasks where the productivity gains of LLM assistance are attainable. It can also make the automated selection of context more reliable for LLM-enhanced AI decision-making.

What really is a Large Language Model?

Large Language Models can also be thought of as translation tools; they essentially pattern match by…

They are adept at telling you what you want to hear, but fall down when making decisions or applying critical thinking.

They are best suited to generating content for non-critical, unimportant tasks where the reward is high and the risk is low.
