
AI Glossary

Clear explanations of key AI and machine learning terms.

Terms (20)

LLM (Large Language Model)

A type of AI model trained on massive text datasets to understand and generate human language. Examples include GPT-4, Claude, and Llama.

Transformer

A neural network architecture that uses self-attention mechanisms to process sequential data. The foundation of modern language models.

Token

The basic unit of text that language models process. A token can be a word, part of a word, or a character. English text averages about 4 characters per token.
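The ~4-characters-per-token average can be turned into a rough back-of-the-envelope estimate. This is only a heuristic sketch; real tokenizers (e.g. BPE-based ones) split text differently per model, and the function name here is illustrative.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token-count estimate for English text using the
    ~4 characters-per-token heuristic. Not a real tokenizer."""
    return max(1, round(len(text) / chars_per_token))

# "Hello, world!" is 13 characters -> roughly 3 tokens.
print(estimate_tokens("Hello, world!"))
```

Useful for ballpark cost or context-window budgeting; for exact counts you would use the target model's own tokenizer.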

Prompt

The input text or instructions given to an AI model to generate a response. Prompt engineering is the practice of crafting effective prompts.

Fine-tuning

The process of further training a pre-trained model on a specific dataset to adapt it for a particular task or domain.

RAG (Retrieval-Augmented Generation)

A technique that combines information retrieval with text generation, allowing AI models to access external knowledge bases for more accurate responses.
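The retrieval step can be sketched in miniature. This toy version scores documents by keyword overlap instead of embedding similarity; a production RAG system would embed the query and documents and search a vector database. All names here are illustrative.

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Toy retrieval: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

docs = [
    "Tokens are the basic units of text a model processes.",
    "Temperature controls the randomness of model outputs.",
]

# The best-matching passage would then be prepended to the prompt
# before the model generates its answer.
context = retrieve("temperature and randomness", docs)
print(context[0])
```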

Hallucination

When an AI model generates information that sounds plausible but is factually incorrect or fabricated.

Context Window

The maximum amount of text (measured in tokens) that a language model can process in a single interaction.

Temperature

A parameter that controls the randomness of AI model outputs. Lower values produce more deterministic responses, higher values more creative ones.
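Mechanically, temperature divides the model's logits before the softmax. A minimal sketch of that scaling, assuming plain Python lists in place of real model logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature before the softmax:
    # T < 1 sharpens the distribution (more deterministic),
    # T > 1 flattens it (more varied / "creative").
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low = softmax_with_temperature(logits, 0.5)
high = softmax_with_temperature(logits, 2.0)

# At low temperature the top token dominates more strongly.
print(max(low) > max(high))
```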

Embedding

A numerical representation of text in a high-dimensional vector space, used for semantic search and similarity comparisons.
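Similarity between embeddings is typically measured with cosine similarity. A sketch with toy 3-dimensional vectors; real embeddings have hundreds or thousands of dimensions and come from a trained model.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical
    direction, 0.0 means unrelated (orthogonal)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hand-made toy "embeddings" for illustration only.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
car = [0.0, 0.2, 0.9]

# Semantically close items should score higher than unrelated ones.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))
```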

AGI (Artificial General Intelligence)

A hypothetical AI system that can understand, learn, and apply knowledge across any intellectual task that a human can perform.

Neural Network

A computing system inspired by biological neural networks, consisting of interconnected nodes that process information in layers.

Diffusion Model

A type of generative model that creates data by gradually removing noise from a random signal. Used in DALL-E, Stable Diffusion, and Midjourney.

Attention Mechanism

A component in neural networks that allows the model to focus on relevant parts of the input when producing output.
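The most common form, scaled dot-product attention, can be sketched on plain lists: each query scores every key, the scores become softmax weights, and the output is the weighted average of the value vectors. This is a minimal single-head sketch, not a full implementation.

```python
import math

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: lists of vectors (one per sequence position)."""
    d = len(K[0])
    out = []
    for q in Q:
        # Score each key against the query, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output = attention-weighted average of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, V))
                    for i in range(len(V[0]))])
    return out

# The query matches the first key, so the output leans toward V[0].
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(scaled_dot_product_attention(Q, K, V))
```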

Zero-shot Learning

The ability of an AI model to perform a task it was not explicitly trained on, using only its general knowledge and the task description.

Few-shot Learning

A technique where an AI model learns to perform a task from just a few examples provided in the prompt.
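The key point is that the examples live in the prompt itself, not in the model's weights. A sketch of assembling such a prompt; the task, labels, and format are illustrative.

```python
examples = [
    ("great movie, loved it", "positive"),
    ("waste of two hours", "negative"),
]

def build_few_shot_prompt(examples, new_input):
    """Concatenate labeled examples, then the unlabeled input,
    so the model completes the final 'Sentiment:' line."""
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(lines)

print(build_few_shot_prompt(examples, "an instant classic"))
```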

RLHF

Reinforcement Learning from Human Feedback. A training technique where human preferences are used to fine-tune AI models for more helpful and safe responses.

Multimodal

AI models that can process and generate multiple types of data, such as text, images, audio, and video.

Inference

The process of using a trained AI model to generate predictions or outputs from new input data.

Latent Space

A compressed representation of data learned by a model, where similar items are positioned close together in the mathematical space.

Quick Reference

20 terms, sorted A–Z.

Categories: Models & Architecture • Learning & Training • Prompting • Data & Processing • Safety & Ethics • Applications

Overview

Learn AI terms through concise definitions and examples.

Use Cases

  • Onboarding new team members and clarifying unfamiliar terminology.
  • Understanding jargon in articles.

Steps

  1. Search or browse for a term.
  2. Read the definition and its context.
  3. Copy or share the explanation.

Example 1
Input
Term: RAG
Output
Retrieval-Augmented Generation...
Explains a common AI technique.
Example 2
Input
Term: Temperature
Output
Controls randomness of outputs...
Explains what a model setting means.
Example 3
Input
Term: Token
Output
Smallest unit of text a model processes.
Explains a common term.

Common Pitfalls

  • Definitions are simplified; verify important terms against authoritative sources.
  • Terminology evolves quickly.

Tips

  • Use entries as starting keywords for deeper research.
  • Compare multiple sources when accuracy matters.

FAQ

Are the definitions authoritative?
They are practical summaries, not formal standards.
Is my data sent anywhere?
No. Everything is processed in your browser.
Does it work offline?
Yes. Once the page has loaded, most features work offline, though some assets such as fonts may require a connection.

Data & Privacy

All processing happens in your browser; no data is sent or stored.