Microsoft’s Thought Algorithm gives artificial intelligence a human-like way of thinking

Microsoft’s Algorithm-of-Thoughts (AoT) takes artificial intelligence one step forward. The system is intended to enable human-like thinking and problem solving while also being energy efficient. AoT uses algorithmic examples to streamline complex tasks, reducing the need for numerous queries. AoT stands out for its efficiency in handling complex processes, making it a potential breakthrough for AI applications.

In the ever-evolving field of artificial intelligence (AI), language models have moved from language understanding to versatile problem solvers, primarily driven by the concept of contextual learning.

Microsoft's "Algorithm of Thoughts" advances this development further, enabling human-like reasoning, planning, and mathematical problem solving in an energy-efficient manner.

By using algorithmic examples, AoT unlocks the potential of language models to explore a variety of ideas with just a few queries.

The following explores the development of prompt-based contextual learning approaches and shows how AoT transforms artificial intelligence for human-like reasoning.

In-context learning

In-context learning is a transformative process that aims to transform language models from mere language experts into adept problem solvers.

To better understand this concept, imagine these models as language learners in a school. Initially, their lessons consist primarily of delving into large amounts of text to acquire knowledge of words and facts.

However, in-context learning takes these learners to the next level by enabling them to acquire specialized skills.

Let’s say you send these students to specialized training programs like a college or a vocational school.

In this phase, they focus on developing specialized skills and mastering various tasks such as language translation (e.g., Meta's Seamless M4T), code generation, or solving complex problems.

In the past, language models only became specialized when they were retrained on new data, a process known as fine-tuning. This proved difficult as models became increasingly large and resource-intensive.

Prompt-based methods were developed to solve these problems. Instead of retraining the entire model, it is given clear instructions, such as answering questions or writing code.

This approach stands out for its control, transparency, and efficient use of data and computing resources, making it a practical choice for a wide range of applications.

Development of prompt-based learning

This section provides a brief overview of the evolution of prompt-based learning from standard prompting to Chain-of-Thought (CoT) and Tree-of-Thought (ToT).

Standard-Prompting

In 2021, researchers conducted a groundbreaking experiment. They got a single generatively pre-trained model, T0, to excel at 12 different NLP tasks.

These tasks used structured instruction templates, such as this one for natural language inference: "If {premise} is true, is it also true that {hypothesis} is true? ||| {entailment}."

The results were astonishing: T0 outperformed models trained only on single tasks and performed even better on new, unseen tasks.

This experiment introduced the prompt-based approach, also known as input-output or standard prompting.

Standard prompting is a simple method of presenting the model with a few task-related examples before expecting a response.

For example, you can ask it to solve equations like “2x + 3 = 11” (solution: “x = 4”). This method is suitable for simple tasks such as solving simple mathematical equations or translations.
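As an illustrative sketch (not from the article), a standard prompt can be assembled by listing a few solved input-output examples before the new query; the `standard_prompt` helper below is hypothetical:

```python
def standard_prompt(examples, query):
    """Build a few-shot input-output prompt: solved examples first, then the new query."""
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

# Two solved equations serve as in-context examples for the new task.
examples = [
    ("Solve 2x + 3 = 11.", "x = 4"),
    ("Solve 5x - 5 = 10.", "x = 3"),
]
print(standard_prompt(examples, "Solve 3x + 1 = 10."))
```

The model sees only isolated input-output pairs, which is exactly why this style breaks down on multi-step problems.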

However, because standard prompting relies on isolated instructions, it struggles with understanding broader contexts and multi-level thinking.

This makes it inefficient for tackling complex math problems, logical thinking, and planning tasks.

The limitations of standard prompting have led to the development of the CoT system, which compensates for these disadvantages.

Chain-of-Thought (CoT) Prompting

CoT is a prompting technique that helps large language models (LLMs) solve problems by breaking them down into a series of intermediate steps that lead to a final answer.

This approach improves the model’s reasoning skills by encouraging it to respond to complex, multi-step problems in a manner that resembles a logical chain of thought.

CoT prompting proves particularly valuable for tasks that require logical reasoning and multiple steps, such as arithmetic problems and common-sense questions.

An example: CoT prompting could be used to solve a complex physics problem, such as calculating the distance a car covers while accelerating.

CoT prompts guide the language model through logical steps, starting with the initial speed of the car, applying the distance formula, and simplifying the calculations.

This illustrates how CoT prompts break down complicated problems step by step and help the model draw accurate conclusions.
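The car-distance example can be sketched as a chain-of-thought prompt (an illustrative construction, not taken from the article): the in-context demonstration spells out the intermediate steps, and the new query ends mid-reasoning so the model continues step by step:

```python
# A chain-of-thought demonstration: the in-context example shows explicit
# intermediate reasoning, so the model imitates that style on the new query.
COT_EXAMPLE = """Q: A car starts at 10 m/s and accelerates at 2 m/s^2 for 5 s. How far does it travel?
A: Step 1: initial velocity v0 = 10 m/s, acceleration a = 2 m/s^2, time t = 5 s.
Step 2: distance d = v0*t + 0.5*a*t^2.
Step 3: d = 10*5 + 0.5*2*25 = 50 + 25 = 75.
The answer is 75 m."""

def cot_prompt(query):
    # Ending with "Step 1:" nudges the model to continue the reasoning chain.
    return f"{COT_EXAMPLE}\n\nQ: {query}\nA: Step 1:"

print(cot_prompt("A car starts at 0 m/s and accelerates at 3 m/s^2 for 4 s. How far does it travel?"))
```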

Tree-of-Thought (ToT) Prompting

However, in certain scenarios, solving problems may involve multiple approaches.

Conventional step-by-step methods such as CoT can limit the exploration of different solutions.

Tree-of-Thought prompting addresses this challenge by using prompts structured as decision trees, allowing language models to consider multiple paths.

This method enables models to look at problems from different perspectives, expanding the range of possibilities and encouraging creative solutions.
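As a minimal sketch (not from Microsoft's or the ToT authors' work), the branch-and-score idea behind this tree-style exploration can be mimicked with a toy beam search over partial solutions; the `tree_of_thought` helper and the digit-sum task below are invented for illustration:

```python
# Toy sketch of tree-style exploration: each "thought" is a partial solution;
# at every level we branch into several candidates, score them, and keep only
# the most promising branches (a simple beam search over the tree).
def tree_of_thought(start, expand, score, depth, beam=2):
    frontier = [start]
    for _ in range(depth):
        candidates = [child for state in frontier for child in expand(state)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)

# Invented toy task: choose three digits whose sum is as close to 23 as possible.
target = 23
expand = lambda state: [state + [d] for d in range(1, 10)]
score = lambda state: -abs(target - sum(state))  # higher is better

best = tree_of_thought([], expand, score, depth=3)
print(best, sum(best))
```

In a real ToT system, `expand` and `score` would each be language-model calls, which is exactly where the query count explodes.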

Challenges of prompt-based learning

Prompt-based approaches have undoubtedly improved the mathematical and reasoning abilities of language models.

At the same time, they have a crucial disadvantage: the number of queries, and with it the computing resources required, can grow exponentially.

Every request made to an online language model like GPT-4 incurs financial costs and contributes to latency, a critical bottleneck for real-time applications. These cumulative delays can undermine solution efficiency.

Additionally, constant interactions can put a strain on systems, leading to bandwidth limitations and reduced model availability. The impact on the environment must also be considered.

Regular queries increase the energy consumption of already energy-intensive data centers, which further worsens their carbon footprint.

Algorithm-of-Thought Prompting

Microsoft has taken on the challenge of improving prompt-based methods in terms of cost, energy efficiency and response time.

They introduced the Algorithm of Thoughts (AoT), a groundbreaking approach that reduces the need for many prompts in complex tasks while maintaining performance.

AoT differs from previous prompting methods: the language models are instructed to generate task-specific pseudocode, similar to clear instructions in Python.

This places emphasis on leveraging the model’s internal thought processes rather than relying on potentially unreliable inputs and outputs at every step.

AoT also includes contextual examples based on search algorithms such as “Depth First Search” and “Breadth First Search,” which help the model break down complicated problems into manageable steps and identify promising solutions.

Although AoT shares similarities with the Tree-of-Thought (ToT) approach, it stands out for its remarkable efficiency.

ToT often requires a large number of language model queries, occasionally numbering in the hundreds for a single problem. In contrast, AoT overcomes this challenge by orchestrating the entire thought process in a single context.

AoT is particularly suitable for tasks similar to tree searches. In these scenarios, the problem-solving process involves breaking the main problem into smaller components, developing solutions for each part, and deciding which options to delve into.

Instead of using separate queries for each subset of the problem, AoT leverages the model’s iterative capabilities to address them in a unified step.

This approach integrates insights from previous contexts and demonstrates its capabilities in tackling complex problems that require deep immersion in the solution field.
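To make the single-context idea concrete, here is a hedged sketch (the `aot_prompt` helper and the toy number task are invented, not taken from Microsoft's paper): the prompt embeds one worked depth-first-search trace, so the model can carry out branching and backtracking inside a single generation rather than issuing one query per branch:

```python
# Sketch of an AoT-style prompt: the in-context example walks through a
# depth-first search, including a failed branch and a backtrack, so the model
# can perform the whole search within a single generation.
AOT_EXAMPLE = """Task: use the numbers 4, 6 to reach 24 with * and +.
Algorithm (depth-first search over operations):
1. Try 4 + 6 = 10. 10 != 24, backtrack.
2. Try 4 * 6 = 24. 24 == 24, success.
Answer: 4 * 6 = 24."""

def aot_prompt(task):
    # One query carries the algorithmic demonstration plus the new task.
    return f"{AOT_EXAMPLE}\n\nTask: {task}\nAlgorithm (depth-first search over operations):"

print(aot_prompt("use the numbers 3, 8 to reach 24 with * and +."))
```

Compared with the ToT pattern, the search strategy lives inside the prompt itself instead of in an external controller that calls the model once per node.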

Conclusion

Microsoft's Algorithm-of-Thoughts (AoT) transforms AI by enabling human-like reasoning, planning, and mathematical problem-solving in an energy-efficient way.

AoT uses algorithmic examples to enable language models to explore different ideas with just a few queries.

AoT builds on the development of prompt-based learning and is characterized by its performance and efficiency when tackling complex tasks.

Not only does it improve AI capabilities, but it also mitigates the challenges posed by resource-intensive query methods.

With AoT, language models can make multi-level inferences and solve challenging problems, opening new possibilities for AI-powered applications.
