

Optimizing Your Data for Maximum LLM Reliability

Insights (8)

Whether you’ve noticed or not, artificial intelligence is becoming integral to modern systems, either front and center or behind the scenes. And this is just the beginning. Thanks to recent advances in large language models (LLMs), applications now range from customer service chatbots to health-care data analysis and nuanced writing advice. This era has arrived on the strength of conversational interfaces, the processing of unstructured data, code synthesis and the automation of simple cognitive tasks.

Beneath this veneer of sophistication, however, lies a critical reality: LLMs are not a panacea for all computing challenges, especially given their tendency to produce results that are plausible without necessarily being accurate.

(As Carnegie Mellon University professor Jignesh Patel put it: “Generative AI exceeded our expectations until we needed it to be dependable, not just amusing.”)

And if you need LLMs to make use of your enterprise data, models or algorithms, this is a very big issue.

Fine-Tuning Limitations

Fine-tuning LLMs was initially seen as the solution to inaccurate answers and hallucinations because it allowed models to be adapted specifically to particular domains or tasks. By exposing the LLM to a curated set of domain-specific data, the model could learn the nuances and specialized knowledge required to generate more accurate and contextually relevant responses. This process promised a significant reduction in errors and improved performance in niche applications, making it an appealing approach for early adopters.

In practice, however, fine-tuning revealed a significant limitation: fine-tuned models tend to become rigid and less adaptable, struggling to incorporate new information or contexts without additional retraining, which is impractical in rapidly evolving fields. This challenge highlights the need for more flexible and scalable approaches to improving LLM performance.

The Retrieval-Interleaved Generation (RIG) Paradigm to the Rescue

Fortunately, the retrieval-interleaved generation (RIG) paradigm addresses many of these limitations. Instead of relying solely on the static knowledge embedded in the model (that is, from its training data), the LLM is connected to external sources such as databases, knowledge systems or even the web. When it encounters a query that requires current or domain-specific information, the model retrieves relevant data dynamically and incorporates it into its generated responses.
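The retrieve-then-incorporate cycle described above can be sketched in a few lines. This is a toy illustration only: `generate_step` and `search_knowledge_base` are hypothetical stand-ins for a real LLM call and a real external data source, and the `RETRIEVE:` convention is an assumed protocol, not any particular vendor's API.

```python
# Minimal sketch of a retrieval-interleaved generation loop.
# All names here are hypothetical stand-ins for illustration.

def search_knowledge_base(query: str) -> str:
    """Stand-in retriever: look up a fact in a toy key-value store."""
    facts = {"gdp of france": "GDP of France (2023): ~$3.0 trillion"}
    return facts.get(query.lower(), "no result")

def generate_step(prompt: str) -> str:
    """Stand-in for an LLM call. A real model would decide here whether
    to answer directly or emit a retrieval request like 'RETRIEVE: <query>'."""
    if "RESULT:" not in prompt:
        return "RETRIEVE: GDP of France"
    return "France's GDP is about $3.0 trillion."

def answer(question: str) -> str:
    prompt = question
    for _ in range(5):  # bound the retrieve/generate loop
        step = generate_step(prompt)
        if step.startswith("RETRIEVE:"):
            query = step[len("RETRIEVE:"):].strip()
            # Interleave the retrieved data back into the running prompt
            prompt += f"\nRESULT: {search_knowledge_base(query)}"
        else:
            return step
    return "gave up"

print(answer("What is the GDP of France?"))
```

The key structural point is the loop: generation pauses whenever the model requests data, the retrieved result is spliced into the context, and generation resumes with that fresh information in hand.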

This was the reason Wolfram was invited to be among the first plugins to ChatGPT (something that has since evolved into Wolfram GPT). That plugin used Wolfram|Alpha as a source of data and the Wolfram Cloud as an engine for executing Wolfram Language code that the LLM might synthesize.


This becomes relevant for most enterprise applications accessing private, proprietary data, whether that is billing and shipping data for a customer services chatbot; production, stock and orders data for a manufacturing control tool; or scientific and engineering models for a research assistant.

Of course, Wolfram Language has a well-established pipeline of technology that makes it trivial to connect to various data sources and add computational tools to an LLM. For simple projects, this is already a solved problem.


The Scaling Challenge

Unfortunately, while the approach of “take an LLM, add some prompt engineering and add some tools” can quickly make great applications for narrow purposes, it can start to break down as you broaden your tool’s ambitions. The problem? As you add more endpoints for each of your different databases, or for multiple models and digital twins, the complexity can overwhelm and confuse your LLM.

Consider, for example, a financial analysis tool that draws on multiple databases for different data types, such as stock market data, economic indicators and company financials. When a user asks how a recent change in an economic indicator might affect the stock market, the LLM needs to fetch data from both the economic indicators database and the historical stock market database to analyze correlations. The LLM might, however, mistakenly query the stock market database for economic indicators (or vice versa), or send incorrect arguments, such as date ranges or specific indicators, to each endpoint. The result is inaccurate or incomplete information, a frustrated user and diminished trust in the tool.

The problem is twofold. First, the LLM starts to get confused about which endpoint to call for which piece of information and which arguments to send to each endpoint. But more profoundly, when you ask queries that cross different silos—say, joining data or passing retrieved data into a model to produce a prediction—it gets confused about what things really mean. This is as much a consequence of the ambiguity of human language, the LLM’s native medium, as a problem with the LLM itself. (It is, after all, why math and other forms of symbolic representation and processing were invented.)


The Computable Knowledge Layer

One solution to the scaling challenge is to produce an all-encompassing endpoint that is a single source of computational knowledge and data and where all these issues of symbolic meaning, source identification, formal representation and processing are taken care of. You then provide a single, flexible interface that the LLM can send its knowledge queries to.
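To make the contrast concrete, here is a hedged sketch of that idea: rather than exposing many database endpoints for the LLM to choose among, all sources sit behind one query interface that handles source selection and argument parsing internally. The `stock_db`, `econ_db` and `knowledge_endpoint` names are invented for this illustration; a production layer would do real entity resolution rather than keyword matching.

```python
# Toy sketch: a single knowledge endpoint wrapping multiple sources.
# All functions and data here are hypothetical illustrations.

def stock_db(ticker):
    """Stand-in stock market source."""
    return {"AAPL": 190.0}.get(ticker)

def econ_db(indicator):
    """Stand-in economic indicators source."""
    return {"cpi": 3.2}.get(indicator)

def knowledge_endpoint(query: str):
    """The one endpoint the LLM calls, in natural language.
    Source identification happens inside the layer, not in the LLM.
    A real system would do entity resolution; this just keyword-matches."""
    q = query.lower()
    if "price" in q or "stock" in q:
        ticker = q.split()[-1].upper()
        return stock_db(ticker)
    if "cpi" in q or "inflation" in q:
        return econ_db("cpi")
    return None

print(knowledge_endpoint("stock price for aapl"))
print(knowledge_endpoint("latest cpi reading"))
```

The LLM now has exactly one tool to learn, and the hard routing decisions move into a layer that can be tested and maintained deterministically.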

Sure, the LLM still has to call this endpoint correctly, but Wolfram has already mastered this type of challenge thanks, once again, to our earlier work with Wolfram|Alpha. It is a knowledge engine designed to be a single source of computable data, originally for direct human access, drawing on private knowledge sources and ontologies. Furthermore, it has a natural language interface that, while far less fluent than modern LLM approaches, is forgiving and broad enough for the LLM to communicate with in natural language, which it does naturally, without having to be taught formal API codes.


Making Your Data Computable

So what is involved in getting data ready for LLM access? At a small scale, nothing. If you have a relatively narrow goal and clean data sources, you can handle the challenges through a combination of endpoint design and prompt engineering. Indeed, we are engaged in several “add an LLM to my data” projects, built from database or document sources directly with combinations of Wolfram Language LLM functionality, Wolfram Chat Notebooks and deployment technologies like Wolfram Enterprise Private Cloud.

But while you are collecting these “easy wins,” you should start preparing your data for the more ambitious “make my entire enterprise knowledge accessible to AI” projects that will soon become a decisive competitive advantage for many organizations. This means moving all your data toward level 10 on Wolfram’s computable data scale.

Wolfram Scale of Data Computability

The central idea for reaching the higher levels is to build a symbolic representational layer capturing the meaning and relationships of the data. That doesn’t require an upheaval in your data capture and storage infrastructure; it is about adding a layer that ensures that when you retrieve a value from your data, you know what it means, how it relates to other values and what models, calculations or visualizations can consume it, all in a fully automated way.

Take a simple example: if you extract a 2 and a 3 from a database, can you perform the operation “2 + 3”? If so, what does it mean? If they represent inches and meters, you can add them, but the answer is not 5. If they represent product IDs, the operation probably isn’t valid. But if they are IDs of investment portfolios, adding them might reasonably be chosen to represent the combined portfolio. Doing this systematically, so that high-fidelity digital twins or predictive models can consume the data, is what unlocks the open-ended, ad hoc queries that an LLM might request.
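The “2 + 3” example can be made concrete with a tiny amount of code. The sketch below hand-rolls a minimal `Quantity` class purely for illustration; a real symbolic layer (or a units library) would carry far richer metadata, but the point is the same: once a value knows its unit, addition becomes well defined.

```python
# Illustrating why raw values need attached meaning: adding 2 inches
# and 3 meters does not give 5. Minimal hand-rolled class; all names
# here are invented for this sketch.

class Quantity:
    TO_METERS = {"inch": 0.0254, "meter": 1.0}  # conversion factors

    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        # Convert both operands to a common unit before adding
        total = (self.value * self.TO_METERS[self.unit]
                 + other.value * self.TO_METERS[other.unit])
        return Quantity(total, "meter")

length = Quantity(2, "inch") + Quantity(3, "meter")
print(round(length.value, 4), length.unit)  # 3.0508 meter
```

The same pattern generalizes: a portfolio-ID type might define “+” as portfolio combination, while a product-ID type would refuse the operation entirely.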

In most organizations, that knowledge gap is patched by humans: librarians, business intelligence (BI) teams, analysts and others in similar roles. Not only is that expensive, it is also slow, and it is the reason why most organizations have near-real-time access only to mission-critical data. And data deemed “less critical”? It will likely languish in a queue awaiting analysts’ attention.


Use Wolfram to Connect Your Dots

Smart business decisions come from making connections between disparate datasets. Take a retail company looking to streamline its supply chain: they’re not just looking at sales numbers. They’re diving into customer feedback, inventory levels and market trends. This holistic view uncovers patterns and forecasts demand with increased precision. And LLMs have the potential to crunch mountains of data to find insights your people could miss. But here’s the kicker: the advantages of LLMs can easily be limited by bad or messy data. If you feed them curated, high-quality data, they’ll give you recommendations that are spot-on. But if not? Bad analysis is worse than no analysis at all.

Your solution is Wolfram technology and our data curation team, which has a decade of experience in creating computable representations of enterprise data. We’re ready to help you on the journey toward enterprise AI.

Contact Wolfram Consulting Group to learn more about using Wolfram’s tech stack and LLM tools to generate actionable business intelligence.


Beyond the Hype: Providing Computational Superpowers for Enterprise AI


Sure, it was laughable when X’s AI chatbot Grok accused NBA star Klay Thompson of a vandalism spree after users described him as “shooting bricks” during a recent game, but it was no joke when iTutorGroup paid $365,000 to job applicants rejected by its AI in a first-of-its-kind bias case. On a larger scale, multiple healthcare companies—including UnitedHealth Group, Cigna Healthcare and Humana—face class-action lawsuits based on their AI algorithms that are alleged to have improperly denied hundreds of thousands of patient claims.

So, while AI—driven by large language models (LLMs)—has emerged as a groundbreaking innovation for streamlining workflows, its current limitations are becoming more apparent, including inaccurate responses and weaknesses in logical and mathematical reasoning.

To address these challenges, Wolfram Research has developed a suite of tools and technologies to enhance the capabilities of LLMs. Wolfram’s technology stack, including the Wolfram Enterprise Private Cloud (EPC) and Wolfram|Alpha, increases the productivity of AI applications in multiple enterprise environments. By leveraging Wolfram’s extensive experience in computational intelligence and data curation, organizations can overcome LLM limitations to achieve greater accuracy and efficiency in AI-driven workflows.

At the same time, Wolfram Consulting Group is not confined to one specific LLM. Instead, we can enhance the capabilities of any sophisticated LLM that utilizes tools and writes computer code, including OpenAI’s GPT-4 (where Wolfram GPT is now available), Anthropic’s Claude 3 and Google’s Gemini Pro. We can also incorporate these tools in a privately hosted LLM within your infrastructure or via public LLM services.


Wolfram’s Integrated Technology Stack

Wolfram has a well-developed tech stack available to modern LLMs: data science tools, machine learning algorithms and visualizations. It also allows the LLM to write code to access your various data sources and store intermediate results in cloud memory, without consuming LLM context-window bandwidth. The Wolfram Language evaluation engine provides correct and deterministic results in complex computational areas where an unassisted LLM would tend to hallucinate.

When your organization is equipped with the Wolfram technology stack for tool-assisted AIs, the productivity of your existing experts is enhanced with methods that support exploratory data analysis, machine learning, data science, instant reporting and more:

  • The LLM can interpret expert user instructions to generate Wolfram code and tool requests performing a wide variety of computational tasks, with instant feedback and expert verification of the intermediate results.
  • Custom tools for accessing corporate/proprietary structured and unstructured data, models and digital twins, and business logic feed problems to the Wolfram Language algorithms implementing your analytic workflows.
  • Working sessions create a documented workflow of thought processes, prompts, tool use and code that can be reused on future problems or reviewed for audit purposes.

The platform is designed for system integration flexibility: use it as a fully integrated system or as a component in an existing one. In full-system integration, the Wolfram tech stack seamlessly manages all communications between the LLM and other system components. Alternatively, use it as a set of callable tools integrated into your existing LLM stack; our modular and extensible design readily adapts to your changing needs. You can also access the integrated Wolfram tech stack through a variety of user interfaces, including a traditional chat experience, a custom Wolfram Chat Notebook, REST APIs and other web-deployed custom user interfaces.


Wolfram Enterprise Private Cloud (EPC)


Wolfram’s EPC serves as a private, centralized hub for accessing Wolfram’s collection of LLM tools and works in commercial cloud environments such as Microsoft Azure, Amazon Web Services (AWS) and Google Cloud. For organizations preferring in-house solutions, EPC can also operate on dedicated hardware within your data center.

Once deployed, EPC can connect to various structured and unstructured data sources. These include SQL databases, graph databases, vector databases and even expansive data lakes. Applications deployed on EPC are accessible via instant web service APIs or through web-deployed user interfaces, including Chat Notebooks. As Wolfram continues to innovate, the capabilities of EPC also grow.


Wolfram|Alpha Infrastructure

Wolfram|Alpha can also be a valuable asset for your suite of tools. With a vast database of curated data across diverse realms of human knowledge, Wolfram|Alpha can augment your existing resources.

Top-tier intelligent assistants, websites, knowledge-based apps and various partners have trusted Wolfram|Alpha APIs for over a decade, answering billions of queries across hundreds of knowledge domains. Wolfram|Alpha’s public LLM-specific API endpoint is tailored for smooth communication and data consumption by LLMs.

If your LLM platform requires a customized version of Wolfram|Alpha, our sales and engineering teams will work with you to optimize your access to its extensive capabilities. This ensures that you have the right setup to harness the full potential of Wolfram|Alpha in your specific context.


Preparing Knowledge for Computation

While many platforms give an LLM access to data retrieval tools, what sets Wolfram apart is its extensive experience in preparing knowledge for computation. For over a decade, Wolfram has provided knowledge curation services and custom versions of Wolfram|Alpha to diverse industries and government institutions, building sophisticated data curation workflows and exposing ontologies and schemas to AI systems. Direct access to vast amounts of data alone is not enough; an LLM requires context for the data and an understanding of the user’s intent.

Corporate cloud environment

Wolfram consultants can establish workflows and services to equip your team with tools for programmatic data curation through an LLM. This process involves creating a list of questions and identifying the subjects or entities to which these questions apply. The LLM, with the aid of the appropriate retrieval tools, then finds the answers and cites its sources. These workflows alleviate the workload of extensive curation tasks, and the enhanced curation capabilities then operate within the EPC infrastructure.
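The questions-times-entities workflow just described can be sketched as a simple loop. Everything here is a hypothetical stand-in for illustration: `ask_llm_with_retrieval` represents a retrieval-backed LLM call that returns an answer together with a source citation, and the toy fact table replaces real enterprise sources.

```python
# Sketch of a programmatic-curation loop: a list of questions crossed
# with a list of entities, each answered with a cited source.
# `ask_llm_with_retrieval` is a hypothetical stand-in.

def ask_llm_with_retrieval(question: str, entity: str):
    """Stand-in: a real version would call an LLM equipped with
    retrieval tools and return (answer, source citation)."""
    toy = {("founding year", "Wolfram Research"): ("1987", "company records")}
    return toy.get((question, entity), ("unknown", "none"))

questions = ["founding year"]
entities = ["Wolfram Research"]

# Build the curated knowledge table, keeping provenance alongside answers
curated = {}
for q in questions:
    for e in entities:
        answer, source = ask_llm_with_retrieval(q, e)
        curated[(e, q)] = {"answer": answer, "source": source}

print(curated[("Wolfram Research", "founding year")])
```

Keeping the source citation next to each curated answer is what makes downstream auditing and spot-checking of the LLM’s work practical.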

At the same time, you’ll retain ownership of any intellectual property created for your funded project, including custom plugins or tools Wolfram develops, ensuring you have full control over the solutions created for your organization.


Enterprise AI the Wolfram Way

When you decide you need a custom LLM solution, let Wolfram Consulting Group build one tailored to your specific needs. From developing runtime environments that help your teams integrate Wolfram technology into existing platforms to creating application architecture, preparing data for computation and performing modeling and digital twin implementation, Wolfram has the unique experience across all areas of computation for the right balance of approaches to achieve optimal results.

By working with Wolfram, you get the best people and the best tools to keep up with developments in the rapidly changing AI landscape. The result? You will capture the full potential of the new generation of LLMs.

Contact Wolfram Consulting Group to learn more about using Wolfram’s tech stack and LLM tools to generate actionable business intelligence.


Leveraging Curated Data for Strategic Decision Making


Navigating today’s volatile business landscape without top-tier data is like trying to predict a hurricane with last month’s weather report. It’s not just reckless; it’s downright dangerous. Quality, up-to-date information is the Doppler radar for your business, helping you see through the unpredictable market conditions to make decisions that aren’t just reactive guesses but proactive strategies. After all, facts are as unyielding as the laws of nature: they don’t bend to our wishes or fears.


Preparing for a Future with Generative AI


In an economic environment where costs are rising, businesses are searching for new ways to improve margins, ideally by increasing productivity while lowering costs at the same time. Generative AI is offering a quickly growing toolbox for enhancing efficiency and reducing operational expenses with relatively low targeted investments. For example, AI tools can be used to process large amounts of documents, images or video content as well as to automatically generate new content at high quality.

It is not difficult for organizations to develop a multitude of ideas of how to put generative AI to work—indeed, the potential seems almost unlimited. But developing a comprehensive AI strategy for a business is a big challenge at a time when foundational technologies appear to evolve on a weekly basis.

The generative AI ecosystem is moving at a breathtaking speed, with new players arriving daily and established players at risk of disappearing. Big, commercial large language models (LLMs) are leading the scoreboards, but smaller and open-source models, including those with commercially viable licenses, are catching up quickly. The cost structure of operating LLMs is currently dominated by a scarcity of specialized hardware for AI clusters, with delivery times of a year or more for large customers. Selecting the right set of tools from an avalanche of unproven and quickly changing open-source projects is another considerable challenge.

It seems hard to pick the right combination of tools, AI models and technology suppliers for long-term tech investments, especially for organizations (including large, established consulting firms and IT service providers) that lack the expertise to implement generative AI. So what is a safe approach to creating an AI strategy if you do not want to miss out on this exciting technology while hedging your bets and minimizing your risk?

Wolfram Consulting Group can help companies navigate this quickly transforming landscape by beginning with carefully selected and sharply focused use cases, avoiding the pitfalls of premature and costly investments. By rapidly developing prototypes for the most promising application areas, clients can gain experience and build the expertise and confidence to develop a longer-term generative AI strategy in preparation for more profound and transformative changes.


A Data-Driven Approach to Multichannel Online Marketing

Client Results (6)

AGM, a globally operating digital marketing agency, develops advertising strategies and executes online marketing campaigns for its customers from a broad range of sectors. Its challenge was to determine the best possible allocation of marketing funds among multiple online channels, optimizing the overall effectiveness and return on investment of its marketing campaigns.

