縮寫 | 英語 | 漢語 |
---|---|---|
A | ||
| | Activation Function | 激活函數(shù) |
| | Adversarial Networks | 對抗網(wǎng)絡(luò) |
| | Affine Layer | 仿射層 |
| | agent | 代理/智能體 |
| | algorithm | 算法 |
| | alpha-beta pruning | α-β剪枝 |
| | anomaly detection | 異常檢測 |
| | approximation | 近似 |
AGI | Artificial General Intelligence | 通用人工智能 |
AI | Artificial Intelligence | 人工智能 |
| | association analysis | 關(guān)聯(lián)分析 |
| | attention mechanism | 注意力機制 |
| | autoencoder | 自編碼器 |
ASR | automatic speech recognition | 自動語音識別 |
| | automatic summarization | 自動摘要 |
| | average gradient | 平均梯度 |
| | Average-Pooling | 平均池化 |
B | ||
BP | backpropagation | 反向傳播 |
BPTT | Backpropagation Through Time | 通過時間的反向傳播 |
BN | Batch Normalization | 分批標準化 |
| | Bayesian network | 貝葉斯網(wǎng)絡(luò) |
| | Bias-Variance Dilemma | 偏差/方差困境 |
Bi-LSTM | Bi-directional Long-Short Term Memory | 雙向長短期記憶 |
| | bias | 偏置/偏差 |
| | big data | 大數(shù)據(jù) |
| | Boltzmann machine | 玻爾茲曼機 |
C | ||
CPU | Central Processing Unit | 中央處理器 |
| | chunk | 詞塊 |
| | clustering | 聚類 |
| | cluster analysis | 聚類分析 |
| | co-adapting | 共適應 |
| | co-occurrence | 共現(xiàn) |
| | Computation Cost | 計算成本 |
| | Computational Linguistics | 計算語言學 |
| | computer vision | 計算機視覺 |
| | concept drift | 概念漂移 |
CRF | conditional random field | 條件隨機域/場 |
| | convergence | 收斂 |
CA | conversational agent | 會話代理 |
| | convexity | 凸性 |
CNN | convolutional neural network | 卷積神經(jīng)網(wǎng)絡(luò) |
| | Cost Function | 成本函數(shù) |
| | cross entropy | 交叉熵 |
D | ||
| | Decision Boundary | 決策邊界 |
| | Decision Trees | 決策樹 |
DBN | Deep Belief Network | 深度信念網(wǎng)絡(luò) |
DCGAN | Deep Convolutional Generative Adversarial Network | 深度卷積生成對抗網(wǎng)絡(luò) |
DL | deep learning | 深度學習 |
DNN | deep neural network | 深度神經(jīng)網(wǎng)絡(luò) |
| | Deep Q-Learning | 深度Q學習 |
DQN | Deep Q-Network | 深度Q網(wǎng)絡(luò) |
DNC | differentiable neural computer | 可微分神經(jīng)計算機 |
| | dimensionality reduction algorithm | 降維算法 |
| | discriminative model | 判別模型 |
| | discriminator | 判別器 |
| | divergence | 散度 |
| | domain adaptation | 領(lǐng)域自適應 |
| | Dropout | |
| | Dynamic Fusion | 動態(tài)融合 |
E | ||
| | Embedding | 嵌入 |
| | emotional analysis | 情緒分析 |
| | End-to-End | 端到端 |
EM | Expectation-Maximization | 期望最大化 |
| | Exploding Gradient Problem | 梯度爆炸問題 |
ELM | Extreme Learning Machine | 超限學習機 |
F | ||
FAIR | Facebook Artificial Intelligence Research | Facebook人工智能研究所 |
| | factorization | 因子分解 |
| | feature engineering | 特征工程 |
| | Feature Learning | 特征學習 |
| | Feedforward Neural Networks | 前饋神經(jīng)網(wǎng)絡(luò) |
G | ||
| | game theory | 博弈論 |
GMM | Gaussian Mixture Model | 高斯混合模型 |
GA | Genetic Algorithm | 遺傳算法 |
| | Generalization | 泛化 |
GAN | Generative Adversarial Networks | 生成對抗網(wǎng)絡(luò) |
| | Generative Model | 生成模型 |
| | Generator | 生成器 |
| | Global Optimization | 全局優(yōu)化 |
GNMT | Google Neural Machine Translation | 谷歌神經(jīng)機器翻譯 |
| | Gradient Descent | 梯度下降 |
| | graph theory | 圖論 |
GPU | graphics processing unit | 圖形處理單元/圖形處理器 |
H | ||
HDM | hidden dynamic model | 隱動態(tài)模型 |
| | hidden layer | 隱藏層 |
HMM | Hidden Markov Model | 隱馬爾可夫模型 |
| | hybrid computing | 混合計算 |
| | hyperparameter | 超參數(shù) |
I | ||
ICA | Independent Component Analysis | 獨立成分分析 |
| | input | 輸入 |
ICML | International Conference on Machine Learning | 國際機器學習大會 |
J | ||
JSD | Jensen-Shannon Divergence | JS散度 |
K | ||
| | K-Means Clustering | K-均值聚類 |
K-NN | K-Nearest Neighbours Algorithm | K-最近鄰算法 |
| | Knowledge Representation | 知識表征 |
KB | knowledge base | 知識庫 |
L | ||
| | language phenomena | 語言現(xiàn)象 |
LDA | Latent Dirichlet Allocation | 隱狄利克雷分布 |
LSA | latent semantic analysis | 潛在語義分析 |
| | learner | 學習器 |
| | Linear Regression | 線性回歸 |
| | log likelihood | 對數(shù)似然 |
| | Logistic Regression | Logistic回歸 |
LSTM | Long-Short Term Memory | 長短期記憶 |
| | loss | 損失 |
M | ||
MT | machine translation | 機器翻譯 |
| | Max-Pooling | 最大池化 |
| | Maximum Likelihood | 最大似然 |
| | minimax game | 最小最大博弈 |
| | Momentum | 動量 |
MLP | Multilayer Perceptron | 多層感知器 |
| | multi-document summarization | 多文檔摘要 |
| | multimodal learning | 多模態(tài)學習 |
| | multiple linear regression | 多元線性回歸 |
N | ||
| | Naive Bayes Classifier | 樸素貝葉斯分類器 |
| | named entity recognition | 命名實體識別 |
| | Nash equilibrium | 納什均衡 |
NLG | natural language generation | 自然語言生成 |
NLP | natural language processing | 自然語言處理 |
NLL | Negative Log Likelihood | 負對數(shù)似然 |
NMT | Neural Machine Translation | 神經(jīng)機器翻譯 |
NTM | Neural Turing Machine | 神經(jīng)圖靈機 |
NCE | noise-contrastive estimation | 噪音對比估計 |
| | non-convex optimization | 非凸優(yōu)化 |
| | non-negative matrix factorization | 非負矩陣分解 |
| | Non-Saturating Game | 非飽和博弈 |
O | ||
| | objective function | 目標函數(shù) |
| | Off-Policy | 離策略 |
| | On-Policy | 在策略 |
| | one-shot learning | 一次性學習 |
| | output | 輸出 |
P | ||
| | Parameter | 參數(shù) |
| | parse tree | 解析樹 |
| | part-of-speech tagging | 詞性標注 |
PSO | Particle Swarm Optimization | 粒子群優(yōu)化算法 |
| | perceptron | 感知器 |
| | polarity detection | 極性檢測 |
| | pooling | 池化 |
PPGN | Plug and Play Generative Network | 即插即用生成網(wǎng)絡(luò) |
PCA | principal component analysis | 主成分分析 |
| | Probabilistic Graphical Model | 概率圖模型 |
Q | ||
QNN | Quantized Neural Network | 量化神經(jīng)網(wǎng)絡(luò) |
| | quantum computer | 量子計算機 |
| | Quantum Computing | 量子計算 |
R | ||
RBF | Radial Basis Function | 徑向基函數(shù) |
| | Random Forest Algorithm | 隨機森林算法 |
ReLU | Rectified Linear Unit | 線性修正單元/線性修正函數(shù) |
RNN | Recurrent Neural Network | 循環(huán)神經(jīng)網(wǎng)絡(luò) |
| | recursive neural network | 遞歸神經(jīng)網(wǎng)絡(luò) |
RL | reinforcement learning | 強化學習 |
| | representation | 表征 |
| | representation learning | 表征學習 |
| | Residual Mapping | 殘差映射 |
| | Residual Network | 殘差網(wǎng)絡(luò) |
RBM | Restricted Boltzmann Machine | 受限玻爾茲曼機 |
| | Robot | 機器人 |
| | Robustness | 穩(wěn)健性 |
RE | Rule Engine | 規(guī)則引擎 |
S | ||
| | saddle point | 鞍點 |
| | Self-Driving | 自動駕駛 |
SOM | self-organizing map | 自組織映射 |
| | Semi-Supervised Learning | 半監(jiān)督學習 |
| | sentiment analysis | 情感分析 |
SLAM | simultaneous localization and mapping | 同步定位與地圖構(gòu)建 |
SVD | Singular Value Decomposition | 奇異值分解 |
| | Spectral Clustering | 譜聚類 |
| | Speech Recognition | 語音識別 |
SGD | stochastic gradient descent | 隨機梯度下降 |
| | supervised learning | 監(jiān)督學習 |
SVM | Support Vector Machine | 支持向量機 |
| | synset | 同義詞集 |
T | ||
t-SNE | t-distributed Stochastic Neighbor Embedding | t-分布隨機近鄰嵌入 |
| | tensor | 張量 |
TPU | Tensor Processing Unit | 張量處理單元 |
| | the least squares method | 最小二乘法 |
| | Threshold | 閾值 |
| | Time Step | 時間步驟 |
| | tokenization | 標記化 |
| | treebank | 樹庫 |
| | transfer learning | 遷移學習 |
| | Turing Machine | 圖靈機 |
U | ||
| | unsupervised learning | 無監(jiān)督學習 |
V | ||
| | Vanishing Gradient Problem | 梯度消失問題 |
VC Theory | Vapnik–Chervonenkis theory | 萬普尼克-澤范蘭杰斯理論 |
| | von Neumann architecture | 馮·諾伊曼架構(gòu)/結(jié)構(gòu) |
W | ||
WGAN | Wasserstein GAN | Wasserstein生成對抗網(wǎng)絡(luò) |
W | weight | 權(quán)重 |
| | word embedding | 詞嵌入 |
WSD | word sense disambiguation | 詞義消歧 |
X | ||
Y | ||
Z | ||
ZSL | zero-shot learning | 零次學習 |
| | zero-data learning | 零數(shù)據(jù)學習 |