TechCrunch AI Glossary
Artificial intelligence is a deep and convoluted world. Scientists who work in this field often rely on jargon and lingo to explain what they're working on. As a result, we frequently have to use those technical terms in our coverage of the AI industry. That's why we decided it would be helpful to put together a glossary with definitions of some of the most important words and phrases we use in our articles.
We will regularly update this glossary to add new entries, as researchers continually uncover novel methods to push the frontier of artificial intelligence while identifying emerging safety risks.
AI agent

An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf, beyond what a more basic AI chatbot could do, such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we've explained before, there are lots of moving pieces in this emerging space, so "AI agent" can mean different things to different people. Infrastructure is also still being built out to deliver on its envisioned capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multistep tasks.
Chain of thought

Given a simple question, a human brain can answer it without even thinking too much about it, things like "Which animal is taller, a giraffe or a cat?" But in many cases, you often need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer (20 chickens and 20 cows).
In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. So-called reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking thanks to reinforcement learning.
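The farmer puzzle above can be worked through as a sequence of explicit intermediate steps, which is a rough analogy for what chain-of-thought prompting asks a model to do. Here is a minimal sketch of that pen-and-paper reasoning, written as code for illustration:

```python
# The farmer puzzle: chickens and cows with 40 heads and 120 legs in total.
heads = 40
legs = 120

# Step 1: assume every animal is a chicken (2 legs each).
legs_if_all_chickens = heads * 2           # 80 legs

# Step 2: each cow adds 2 extra legs over a chicken,
# so the leg surplus tells us how many cows there are.
cows = (legs - legs_if_all_chickens) // 2  # (120 - 80) / 2 = 20

# Step 3: the remaining heads belong to chickens.
chickens = heads - cows                    # 40 - 20 = 20

print(chickens, cows)  # → 20 20
```

Each step is simple on its own; chaining them together is what makes the final answer reliable.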
(See: Large language model (LLM))
Deep learning

A subset of self-improving machine learning in which AI algorithms are designed with a multi-layered, artificial neural network (ANN) structure. This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain.
Deep learning AIs are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results (millions or more). They also typically take longer to train than simpler machine learning algorithms, so development costs tend to be higher.
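The "multi-layered" structure described above can be sketched in a few lines. This is a toy illustration, not a trained model: the weights and inputs below are invented numbers, and the point is only to show how each layer computes weighted sums followed by a non-linearity, and how layers stack.

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: a weighted sum per neuron, then a sigmoid non-linearity."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1 / (1 + math.exp(-z)))  # sigmoid squashes z into (0, 1)
    return outputs

# Two inputs -> three hidden neurons -> one output. Stacking layers is what
# lets the network express correlations a single linear model cannot.
hidden = layer([0.5, -1.2],
               weights=[[0.4, 0.9], [-0.7, 0.2], [0.1, 0.5]],
               biases=[0.0, 0.1, -0.2])
output = layer(hidden, weights=[[1.0, -1.0, 0.5]], biases=[0.05])
print(round(output[0], 3))
```

Real deep learning systems have many more layers and millions to billions of these weighted connections, and the weights are learned from data rather than hand-written.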
(See: Neural network)
Fine-tuning

The further training of an AI model to optimize its performance for a more specific task or area than was previously the focal point of its training, typically by feeding in new, specialized (i.e., task-oriented) data.
Many AI startups take large language models as a starting point for building a commercial product but strive to amp up their utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise.
(See: Large language model (LLM))
Large language model (LLM)

Large language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google's Gemini, Meta's Llama, Microsoft Copilot, or Mistral's Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters.
AI assistants and LLMs can have different names. For instance, GPT is OpenAI's large language model, and ChatGPT is the AI assistant product.
LLMs are deep neural networks made of billions of numerical parameters (or weights, see below) that learn the relationships between words and phrases and create a representation of language, a sort of multidimensional map of words.
These models are created from encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. Then it evaluates the most probable next word after the last one, based on what was said before. Repeat, repeat, and repeat.
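That "predict the most probable next word, then repeat" loop can be illustrated with a toy model. Real LLMs learn probabilities over whole vocabularies from billions of documents; here the probability table is tiny and hand-written purely for demonstration:

```python
# A hand-written toy table: for each word, the probability of each next word.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.2, "ran": 0.8},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_word, max_tokens=3):
    """Repeatedly append the most probable next word (greedy decoding)."""
    words = [prompt_word]
    for _ in range(max_tokens):
        candidates = bigram_probs.get(words[-1])
        if not candidates:
            break  # no known continuation; stop generating
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the"))  # → the cat sat down
```

Production assistants usually sample from the probability distribution rather than always taking the single top word, which is why the same prompt can produce different answers.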
(See: Neural network)
Neural network

A neural network refers to the multi-layered algorithmic structure that underpins deep learning and, more broadly, the whole boom in generative AI tools following the emergence of large language models.
Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data-processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphical processing hardware (GPUs), via the video game industry, that really unlocked the power of this theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs, enabling neural-network-based AI systems to achieve far better performance across many domains, whether for voice recognition, autonomous navigation, or drug discovery.
(See: Large language model (LLM))
Weights

Weights are core to AI training, as they determine how much importance (or weight) is given to different features (or input variables) in the data used to train the system, thereby shaping the AI model's output.
Put another way, weights are numerical parameters that define what's most salient in a dataset for a given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with randomly assigned weights, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target.
For example, an AI model for predicting house prices that's trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, and whether it has parking, a garage, and so on.
Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given dataset.
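The housing example can be made concrete with a weighted sum. The feature values and weights below are invented purely for illustration; in a real model they would be learned from the training data rather than chosen by hand:

```python
# Inputs describing one (hypothetical) property.
features = {
    "bedrooms": 3,
    "bathrooms": 2,
    "detached": 1,    # 1 = yes, 0 = no
    "parking": 1,
}

# Illustrative weights: how much each feature influences the predicted price.
weights = {
    "bedrooms": 25_000,
    "bathrooms": 10_000,
    "detached": 40_000,
    "parking": 8_000,
}

base_price = 100_000  # bias term: the predicted price when every feature is zero

# Each weight multiplies its input, and the products are summed.
predicted = base_price + sum(weights[f] * value for f, value in features.items())
print(predicted)  # → 243000
```

A feature with a large weight (here, being detached) moves the prediction more than one with a small weight (here, parking), which is exactly what "importance" means in this context.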