TechCrunch AI Glossary


Artificial intelligence is a deep and convoluted world. Scientists who work in this field often rely on jargon to explain what they are working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That's why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases that we use in our articles.

We will update this glossary regularly to add new entries, as researchers continually uncover new methods to push the frontier of artificial intelligence while identifying emerging safety risks.


An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf, beyond what a more basic AI chatbot could do, such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we have explained before, there are lots of moving pieces in this emergent space, so different people can mean different things when they refer to an AI agent. Infrastructure is also still being built out to deliver on its envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multistep tasks.

Given a simple question, a human brain can answer it without even thinking too much about it, for things like "Which animal is taller, a giraffe or a cat?" But in many cases, you need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer (20 chickens and 20 cows).
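The farm puzzle above can be written as two simultaneous equations and solved with explicit intermediate steps, which is exactly the kind of scratch-pad work the pen and paper stands in for. A minimal sketch in Python:

```python
# Each chicken has 1 head and 2 legs; each cow has 1 head and 4 legs.
# heads: chickens + cows = 40
# legs:  2 * chickens + 4 * cows = 120
heads, legs = 40, 120

# Substitute chickens = heads - cows into the legs equation:
# 2 * (heads - cows) + 4 * cows = legs  =>  cows = (legs - 2 * heads) / 2
cows = (legs - 2 * heads) // 2
chickens = heads - cows

print(chickens, cows)  # 20 20
```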

In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. Reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking thanks to reinforcement learning.

(See: Large language model (LLM))

A subset of machine learning in which self-improving AI algorithms are designed with a multi-layered artificial neural network (ANN) structure. This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain.

Deep learning AIs are able to identify important characteristics in the data themselves, rather than requiring humans to define those features for them. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results (millions or more). They also typically take longer to train than simpler machine learning algorithms, so development costs tend to be higher.

(See: Neural network)

This means the further training of an AI model to optimize performance for a more specific task or area than was previously a focal point of its training, typically by feeding in new, specialized (i.e., task-oriented) data.

Many AI startups take large language models as a starting point for building a commercial product, but compete to amp up utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise.

(See: Large language model (LLM))

Large language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google's Gemini, Meta's AI Llama, Microsoft Copilot, or Mistral's Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters.

AI assistants and LLMs can have different names. For instance, GPT is OpenAI's large language model, and ChatGPT is the AI assistant product.

LLMs are deep neural networks made of billions of numerical parameters (or weights, see below) that learn the relationships between words and phrases and create a representation of language, a sort of multidimensional map of words.

These models are created from encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. Then it evaluates the most probable next word after the last one, based on what was said before. Repeat, repeat, and repeat.
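That repeat-until-done loop can be illustrated with a toy bigram model, which says nothing about any real LLM's internals but shows the same idea in miniature: count which word most often follows each word in some text, then extend a prompt one most-likely word at a time. The corpus here is invented for illustration:

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; real LLMs learn from billions of documents.
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which word follows it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt, steps):
    """Repeatedly append the most likely word after the current last word."""
    words = prompt.split()
    for _ in range(steps):
        candidates = follows[words[-1]].most_common(1)
        if not candidates:
            break  # no known continuation
        words.append(candidates[0][0])
    return " ".join(words)

print(generate("the", 3))  # the cat sat on
```

A real LLM replaces the frequency table with billions of learned weights and conditions on the whole context rather than just the last word, but the generation loop is the same shape.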

(See: Neural network)

A neural network refers to the multi-layered algorithmic structure that underpins deep learning and, more broadly, the whole boom in generative AI tools following the emergence of large language models.

Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data-processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphical processing units (GPUs), via the video game industry, that really unlocked the power of the theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs, enabling neural network-based AI systems to achieve far better performance across many domains, whether for voice recognition, autonomous navigation, or drug discovery.
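The "many layers" idea can be sketched in a few lines: each layer takes the previous layer's outputs, forms weighted sums, and passes them through a nonlinearity. This is a minimal illustration only, with arbitrary made-up weights rather than anything trained:

```python
def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums passed through a ReLU."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(max(0.0, total))  # ReLU activation
    return outputs

# Two stacked layers: 3 inputs -> 2 hidden units -> 1 output.
# All weights here are arbitrary; training would adjust them.
x = [1.0, 0.5, -1.0]
hidden = layer(x, [[0.2, 0.4, 0.1], [-0.5, 0.3, 0.8]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])
print(output)
```

Deep learning stacks dozens or hundreds of such layers; GPUs made it practical to train all their weights at once.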

(See: Large language model (LLM))

Weights are core to AI training, as they determine how much importance (or weight) is given to different features (or input variables) in the data used to train the system, thereby shaping the AI model's output.

Put another way, weights are numerical parameters that define what is most salient in a dataset for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with randomly assigned weights, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target.

For example, an AI model for predicting house prices that is trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, whether or not it has parking, and so on.

Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given dataset.
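The house price example above boils down to multiplying each feature by its weight and summing the results. Here is a hedged sketch of that arithmetic, where every feature, weight, and price is invented purely for illustration, not learned from any real data:

```python
# Hypothetical features for one property: [bedrooms, bathrooms, has_parking]
features = [3, 2, 1]

# Hypothetical learned weights: how strongly each feature sways the price,
# plus a base price (a bias term). Real values would come from training.
weights = [40_000, 25_000, 15_000]
base_price = 100_000

# Weights achieve their function by multiplication: weight * input, summed.
predicted_price = base_price + sum(w * f for w, f in zip(weights, features))
print(predicted_price)  # 285000
```

During training, the model would nudge these three numbers up or down until its predictions line up with the sale prices in the historical data.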
