- Sample Post (샘플 포스트) 2
- Readability (가독성) 1
- Test (테스트) 2
- sample post 2
- readability 1
- test 4
- intro 1
- code 1
- highlighting 1
- Style (스타일) 1
- style 1
- tags 1
- markdown file 1
- Sample (샘플) 1
- Blog Post (블로그 포스트) 1
- cheat sheet 2
- GAN 1
- CNN 1
- Reinforcement Learning 1
- Finance 1
- Entertainment 1
- Meta Learning 1
- cs231n 1
- One-Shot Learning 1
- Code 1
- Project 1
- Java 1
- MSA 1
- mysql 1
- Python 1
- Docker 1
- RNN 1
- LSTM 1
- Paper Review (논문리뷰) 1
- Paper Implementation (논문구현) 2
- Paper Review 1
- Paper Implementation 1
- Keras 1
- Pytorch 2
- Tensorflow 1
- Fine Tuning 1
- Bayesian Statistics / Bayes’ Theorem (Bayes’ Rule) 1
- Attention(in RNN) 1
- Pr12 1
- Transfer Learning 1
- Hierarchical RL 1
- Dynamic Programming 1
- Hierarchical RL (계층형강화학습) 1
- Inverse RL (역강화학습) 1
- CUDA 1
- R2D2 1
- Likelihood 1
- Log-likelihood 2
- Likelihood Estimation 1
- MLE(Maximum Likelihood Estimation) 2
- MAP(Maximum a Posteriori Estimation) 1
- PDF(Probability Density Function) 1
- PMF(Probability Mass Function) 1
- GMM(Gaussian Mixture Model) 1
- EM(Expectation Maximization) 2
- ML(Probabilistic Perspective) 1
- Classical Statistics vs Bayesian Statistics 1
- Independence (독립) 1
- Joint Probability (결합확률) 1
- Marginal Probability Distribution (주변확률분포) 1
- Conditional Probability (조건부확률) 1
- Classic Generative Model vs Deep Generative Model 1
- independent and identically distributed (i.i.d.) 1
- Conjugate Prior 1
- Likelihood (우도) 1
- Prior Probability (사전확률) 1
- Posterior Probability (사후확률) 1
- Law of Total Probability (전확률 공식) or Bayes’ Rule (베이즈 법칙) 1
- Probability Theory (확률 이론) 1
- Information Theory (정보 이론) 1
- Entropy 1
- KL-Divergence 1
- VAE(Variational Autoencoder) 1
- Information Entropy (정보엔트로피) 1
- Distribution (분포) 1
- Probability Distribution (확률분포) 1
- Least Squares (최소자승법) 1
- Least Mean Squares (최소평균제곱법) 1
- Residual 2
- Cost = Error = Loss (오차 = 손실 = 비용) 2
- Cross-Entropy Error, CEE (교차 엔트로피 오차) 2
- Logistic Regression (Binary Classification) vs Multinomial Logistic Regression = Softmax (Multinomial Classification) 2
- Convexity / Convex Function (볼록함수) 1
- Bernoulli Distribution / Gaussian Distribution 1
- Defining a Probability Distribution - Discrete Distribution(PMF) / Continuous Distribution(PDF) 1
- Conditional Distribution 1
- Expectations and Variance 1
- Some Important Distributions(Bernoulli/Poisson/Gaussian) 1
- Working with Probabilities (The Log Trick / Delayed Normalization / Jensen’s Inequality) 1
- Manifold Learning (매니폴드러닝) 1
- Meta Learning (메타러닝) 1
- One-shot Learning (원샷러닝) 1
- Transfer Learning (트랜스퍼러닝) 1
- Imitation Learning (이미테이션러닝) 1
- Hierarchical RL (계층형강화학습) 1
- Inverse RL (역강화학습, IRL) 1
- Distributed RL (Distributed Prioritized Experience Replay, 분산 강화학습) 1
- Active Learning (능동학습) 1
- Summary (정리) 3
- Introduction to Generative Models (생성모델) 1
- Binomial Distribution (이항분포) 1
- Bernoulli Distribution (베르누이분포) 1
- Normal Distribution (정규분포) 1
- Gaussian Distribution (가우시안분포) 1
- Multinomial Distribution (다항분포) 1
- Bernoulli Trial / Independent Bernoulli Trials (베르누이시행/베르누이독립시행) 1
- Difference between Likelihood and Probability 1
- Likelihood (가능도) 1
- Types of Estimators (Estimator 종류) 1
- XX-Likelihood Estimation (가능도 추정량) 1
- Discriminative Approach & Generative Approach 1
- GDA(Gaussian Discriminant Analysis) 1
- GMM(Gaussian Mixture Model) = MoG(Mixture of Gaussian) 1
- Linear Regression 2
- Naïve Bayes 1
- Gaussian Discriminant Analysis 3
- Joint Likelihood 1
- Joint Probability 1
- Analytic solution 1
- Parameters (모수) 1
- Bayes’ Rule 1
- Expectation 1
- Lower Bound of Log-likelihood 1
- Lower Bound of Log-likelihood = the Mean (Expectation) of the Log-likelihood Function? 1
- Jensen’s Inequality 1
- Posterior Probability(사후확률) / Prior Probability(사전확률) 1
- Computer Vision Task 1
- CNN Approach 1
- Localization 1
- Detection 1
- Segmentation 1
- Region Proposal 1
- R-CNN 1
- YOLO 1
- SSD 1
- ConvNet Visualization (시각화) 2
- DeepDream 2
- CNN representations control 2
- Activation control 2
- Gradient control 2
- Style Transfer 2
- Information Theory 1
- Applied Mathematics (응용수학) 1
- Shannon Entropy (섀넌 엔트로피) 1
- Entropy (엔트로피) 1
- Deterministic Distribution (분포가 결정적) 1
- Uniform Distribution (분포가 균등적) 1
- KL Divergence (KLD) 1
- Loss Functions in DL (손실함수) 1
- Cross Entropy (크로스 엔트로피) 1
- Probability 1
- Probability theory 1
- Random Variable (확률변수) 1
- Probability Distribution (확률분포) 1
- Bayesian Statistics (베이즈통계) 1
- Theory (이론) 1
- Code Implementation (코드구현) 1
- Gaussian Mixture Model 1
- Naive Bayes 1
- Discriminator Model 1
- Generative Model 1
- EM-algorithm 1
- Logistic Regression 1
- GMM(Gaussian Mixture Model) with EM algorithm 1
- fcs231n 1
- Data Augmentation 1
- Paper (논문) 1
- Code (코드) 2
- Paper Reading (논문읽기) 1
- DL 2
- Math Notation (수학기호) 1
- Glossary (용어정리) 1
- LaTeX 1
- MathJax 1
- Summary (요약) 1
- Proof (증명) 1
Sample Post (샘플 포스트)
- A Full and Comprehensive Style Test(Korean ver.)
- Testing Readability with a Bunch of Text(Korean ver.)
Readability (가독성)
Test (테스트)
- A Full and Comprehensive Style Test(Korean ver.)
- Testing Readability with a Bunch of Text(Korean ver.)
sample post
readability
test
- Hello gitpage post with markdown
- You're up and running!
- A Full and Comprehensive Style Test
- Testing Readability with a Bunch of Text
intro
code
highlighting
Style (스타일)
style
tags
markdown file
Sample (샘플)
Blog Post (블로그 포스트)
cheat sheet
GAN
CNN
Reinforcement Learning
Finance
Entertainment
Meta Learning
cs231n
One-Shot Learning
Code
Project
Java
MSA
mysql
Python
Docker
RNN
LSTM
Paper Review (논문리뷰)
Paper Implementation (논문구현)
Paper Review
Paper Implementation
Keras
Pytorch
Tensorflow
Fine Tuning
Bayesian Statistics / Bayes’ Theorem (Bayes’ Rule)
Attention(in RNN)
Pr12
Transfer Learning
Hierarchical RL
Dynamic Programming
Hierarchical RL (계층형강화학습)
Inverse RL (역강화학습)
CUDA
R2D2
Likelihood
Log-likelihood
Likelihood Estimation
MLE(Maximum Likelihood Estimation)
MAP(Maximum a Posteriori Estimation)
PDF(Probability Density Function)
PMF(Probability Mass Function)
GMM(Gaussian Mixture Model)
EM(Expectation Maximization)
ML(Probabilistic Perspective)
Classical Statistics vs Bayesian Statistics
Independence (독립)
Joint Probability (결합확률)
Marginal Probability Distribution (주변확률분포)
Conditional Probability (조건부확률)
Classic Generative Model vs Deep Generative Model
independent and identically distributed (i.i.d.)
Conjugate Prior
Likelihood (우도)
Prior Probability (사전확률)
Posterior Probability (사후확률)
Law of Total Probability (전확률 공식) or Bayes’ Rule (베이즈 법칙)
Probability Theory (확률 이론)
Information Theory (정보 이론)
Entropy
KL-Divergence
VAE(Variational Autoencoder)
Information Entropy (정보엔트로피)
Distribution (분포)
Probability Distribution (확률분포)
Least Squares (최소자승법)
Least Mean Squares (최소평균제곱법)
Residual
Cost = Error = Loss (오차 = 손실 = 비용)
Cross-Entropy Error, CEE (교차 엔트로피 오차)
Logistic Regression (Binary Classification) vs Multinomial Logistic Regression = Softmax (Multinomial Classification)
Convexity / Convex Function (볼록함수)
Bernoulli Distribution / Gaussian Distribution
Defining a Probability Distribution - Discrete Distribution(PMF) / Continuous Distribution(PDF)
Conditional Distribution
Expectations and Variance
Some Important Distributions(Bernoulli/Poisson/Gaussian)
Working with Probabilities (The Log Trick / Delayed Normalization / Jensen’s Inequality)
Manifold Learning (매니폴드러닝)
Meta Learning (메타러닝)
One-shot Learning (원샷러닝)
Transfer Learning (트랜스퍼러닝)
Imitation Learning (이미테이션러닝)
Hierarchical RL (계층형강화학습)
Inverse RL (역강화학습, IRL)
Distributed RL (Distributed Prioritized Experience Replay, 분산 강화학습)
Active Learning (능동학습)
Summary (정리)
- ConvNet, Visualizing and Understanding
- Localization, Detection and Segmentation
- Probability Theory - Classic Probability Model
Introduction to Generative Models (생성모델)
Binomial Distribution (이항분포)
Bernoulli Distribution (베르누이분포)
Normal Distribution (정규분포)
Gaussian Distribution (가우시안분포)
Multinomial Distribution (다항분포)
Bernoulli Trial / Independent Bernoulli Trials (베르누이시행/베르누이독립시행)
Difference between Likelihood and Probability
Likelihood (가능도)
Types of Estimators (Estimator 종류)
XX-Likelihood Estimation (가능도 추정량)
Discriminative Approach & Generative Approach
GDA(Gaussian Discriminant Analysis)
GMM(Gaussian Mixture Model) = MoG(Mixture of Gaussian)
Linear Regression
Naïve Bayes
Gaussian Discriminant Analysis
- fcs231n Lec1 - Classic Generative Model (전통적인 생성 모델)
- Probability Theory - Classic Probability Model