
Machine Learning

10/11 Tue 1. The machine learning workflow and the representative algorithms for each learning method 2. GridSearchCV — cross-validation and optimal hyperparameter tuning in one go from sklearn.model_selection import GridSearchCV X_train, X_test, y_train, y_test = train_test_split(iris_data.data, iris_data.target, test_size = 0.2, random_state = 121) dtree = DecisionTreeClassifier() # max_depth = maximum depth of the decision tree, min_samples_split = minimum number of samples required to split a node into child rule nodes parameters = {'max_depth':[1,.. Read more
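The truncated snippet above can be completed as a minimal runnable sketch. The parameter-grid values beyond `max_depth: [1` and the choice of `cv=3` are illustrative assumptions, not necessarily the post's actual settings:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier

iris_data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris_data.data, iris_data.target, test_size=0.2, random_state=121)

dtree = DecisionTreeClassifier()

# max_depth = maximum depth of the decision tree
# min_samples_split = minimum number of samples required to split a node
parameters = {'max_depth': [1, 2, 3], 'min_samples_split': [2, 3]}

# every parameter combination is scored with 3-fold cross-validation;
# refit=True retrains the best combination on the whole training set
grid_dtree = GridSearchCV(dtree, param_grid=parameters, cv=3, refit=True)
grid_dtree.fit(X_train, y_train)

print(grid_dtree.best_params_)
print(grid_dtree.best_score_)
pred = grid_dtree.best_estimator_.predict(X_test)
```

`best_params_` and `best_score_` give the tuning result, and `best_estimator_` is immediately usable for prediction — cross-validation and tuning in one object, as the post says.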
5/19 ๋ชฉ_์ธ๊ณต์ง€๋Šฅ ํ”„๋กœ์ ํŠธ ์˜ค๋žœ๋งŒ!! ๋“œ๋””์–ด ์ธ๊ณต์ง€๋Šฅ ํ”„๋กœ์ ํŠธ๊ฐ€ ๋งˆ๋ฌด๋ฆฌ๋˜์—ˆ๋‹ค! 4/26 ํ™”์š”์ผ๋ถ€ํ„ฐ 5/17 ํ™”์š”์ผ๊นŒ์ง€ 22์ผ ๋™์•ˆ ์ž‘์—…ํ•˜๊ณ , 5/19 ์˜ค๋Š˜! ๋ฐœํ‘œ๋ฅผ ์ง„ํ–‰ํ–ˆ๋‹ค. ํ”„๋กœ์ ํŠธ ์ฃผ์ œ๋Š” ์šฐ๋ฆฌ๊ฐ€ ๋ฐฐ์šด ๋‚ด์šฉ์„ ํ™œ์šฉํ•˜์—ฌ, ์‚ฌํšŒ์ ์ธ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š”๋ฐ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ๋Š” ์„œ๋น„์Šค๋กœ ์„ ์ •ํ•˜์˜€๋‹ค. ์กฐ์›๋“ค๊ณผ ๋ฏธ๋ฆฌ ๋Œ€๋ณธ๊ณผ ์‹œ์—ฐ ์˜์ƒ์„ ๋งŒ๋“ค์–ด ๋†“์•˜๊ณ  ๋ฐœํ‘œ๋Š” ๋‚ด๊ฐ€ ํ•จ! ์‚ฌ์ „์— ๋ฆฌํ—ˆ์„ค ์—ฌ๋Ÿฌ ๋ฒˆ ํ–ˆ์„ ๋•Œ๋Š” ๋ฌธ์ œ๊ฐ€ ์—†๋‹ค๊ฐ€, ๋ง‰์ƒ ๋ฐœํ‘œ ์ˆœ๋ฒˆ์ด ๋˜์—ˆ์„ ๋•Œ ์คŒ์œผ๋กœ ํ™”๋ฉด ๊ณต์œ ๊ฐ€ ์•ˆ ๋˜์–ด์„œ ๋‹นํ™ฉํ–ˆ๋‹คใ…œ ์คŒ์„ ๋‚˜๊ฐ”๋‹ค ๋‹ค์‹œ ๋“ค์–ด์˜ค๋Š๋ผ๊ณ  ์†Œ๋ฆฌ ๊ณต์œ  ์„ค์ •์ด ํ’€๋ฆฌ๋Š” ๋ฐ”๋žŒ์— ์‹œ์—ฐ ์˜์ƒ ์ดˆ๋ฐ˜๋ถ€ ์Œ์„ฑ์ด ์•ˆ ๋‚˜์™€์„œ.. ๋„ˆ๋ฌด ์•„์‰ฝ๋‹ค. ๐Ÿ˜‡ 1์กฐ - ์‹ ์šฉ์นด๋“œ ์‚ฌ์šฉ์ž ์—ฐ์ฒด ์˜ˆ์ธก AI ๊ฒฝ์ง„๋Œ€ํšŒ 2์กฐ - ๋ฐ์ด์ฝ˜ ์ƒ์œกํ™˜๊ฒฝ ์ตœ์ ํ™” ๊ฒฝ์ง„๋Œ€ํšŒ 3์กฐ - ์ˆ˜์–ด ์ธ์‹ ๋ชจ๋ธ ๊ตฌํ˜„ 4์กฐ - ๋ฐ์ด์ฝ˜ CV .. ๋”๋ณด๊ธฐ
4/15 Fri Yes, Friday!!!!!!!!!!!!!!!!!!!!!!! 😇🥳 Today we implement MNIST with a CNN, Tensorflow 2.x, and Colab. Params(weights) = ksize Height × ksize Width × number of filters + b(number of filters) (this holds for a single-channel input; in general, multiply by the number of input channels as well) 1. MNIST by CNN, Tensorflow 2.x, Colab import numpy as np import pandas as pd import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Flatten, Dense from tensorflow.keras.layers import Conv2D, MaxP.. Read more
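The parameter-count formula can be sanity-checked with plain arithmetic. The 3×3 kernel and 32/64 filter counts below are assumed layer sizes for illustration, not necessarily the post's model:

```python
def conv2d_params(ksize_h, ksize_w, in_channels, n_filters):
    # each filter holds ksize_h * ksize_w weights per input channel,
    # plus one bias per filter
    return ksize_h * ksize_w * in_channels * n_filters + n_filters

# first Conv2D layer on grayscale MNIST (1 input channel):
# 3*3*1*32 + 32 = 320 parameters
print(conv2d_params(3, 3, 1, 32))   # 320

# a following layer sees 32 channels: 3*3*32*64 + 64 = 18496
print(conv2d_params(3, 3, 32, 64))  # 18496
```

These are exactly the numbers Keras reports in `model.summary()` for such layers, which makes the formula easy to verify in Colab.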
4/14 Thu Thursday! Today we implement a CNN in actual code! One image: 2-D (X, T) → Convolution Layer :: Feature Map: several 2-D maps → Activation Map: 3-D → X data (image info: number of images, height, width, Channel): 4-D Data ▶ conv: repeated work so that the feature-extracting step produces many image sheets (number of images, Feature Map height, Feature Map width, number of filters) ▶ Pooling Layer :: shrinks the size of the many data sheets that went through conv ▶ conv :: extracts features again from the data that passed the Pooling Layer ▶ FLATTEN :: 4-D → 2-D (when batch size is included) 1. Cha.. Read more
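The shape bookkeeping in the conv ▶ pool ▶ conv ▶ pool ▶ FLATTEN pipeline above can be sketched with plain arithmetic. Stride 1, no padding, 2×2 pooling, and 64 filters in the last conv layer are assumptions for illustration:

```python
def conv_out(size, ksize, stride=1, padding=0):
    # spatial size after a convolution: (W - K + 2P) // S + 1
    return (size - ksize + 2 * padding) // stride + 1

def pool_out(size, psize):
    # non-overlapping pooling with a psize x psize window
    return size // psize

# a 28x28 grayscale image through the pipeline:
h = conv_out(28, 3)     # 26 -> Feature Maps (batch, 26, 26, filters)
h = pool_out(h, 2)      # 13 -> pooling shrinks each map
h = conv_out(h, 3)      # 11 -> features extracted again
h = pool_out(h, 2)      # 5
flattened = h * h * 64  # FLATTEN: 4-D -> 2-D, (batch, 1600)
print(flattened)        # 1600
```

Tracking these sizes by hand before running the model catches shape mismatches early.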
4/13 Wed Wednesday! We move on to CNNs (Convolutional Neural Network, convnet)! Kinds of Deep Learning (Deep Neural Network): - Computer Vision: the field of Computer Science that enables a computer to look at images or video and identify and understand multiple objects. The representative algorithm is the CNN. The goal is to devise ways of understanding pixels. - NLP (Natural Language Processing): processing of natural language. Representative algorithms are the RNN and LSTM. The smallest unit making up an image → pixel Image coordinate system - expressed as a 2-D ndarray - pixel (height, width) 1. Image processing (Im.. Read more
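The (height, width) pixel convention mentioned above is easy to get backwards; a minimal numpy illustration (the toy 3×4 array is an assumption for the example):

```python
import numpy as np

# a grayscale image is a 2-D ndarray indexed (row, column) == (height, width)
img = np.arange(12, dtype=np.uint8).reshape(3, 4)  # height 3, width 4

print(img.shape)  # (3, 4) -> (height, width), not (width, height)
print(img[2, 3])  # 11: the bottom-right pixel, row index 2, column index 3
```

Keeping the row-major (height, width) order straight matters later, when images are stacked into the 4-D (images, height, width, channel) tensors CNNs consume.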
4/12 Tue Tuesday! Today we study why present-day Deep Learning has started to become reasonably efficient. Random initial values for Weight and Bias were replaced with Xavier/He Initialization; the Vanishing Gradient problem that appears during Back-Propagation (matrix operations that update W and b in the reverse direction) was addressed by using ReLU instead of Sigmoid as the Activation function; and the number of W, b that must be computed was cut by using Drop-out to reduce the nodes taking part in each computation 1. Multinomial Classification by Tensorflow 1.15 ver. import numpy as np import pandas as pd import tensorflow as tf imp.. Read more
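The three fixes named above can be sketched in plain numpy; the layer sizes (784 → 256) and the 0.5 drop rate are illustrative assumptions, not the post's TF 1.15 code:

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(fan_in, fan_out):
    # He initialization: std = sqrt(2 / fan_in), tuned for ReLU layers
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def relu(x):
    # ReLU passes positive gradients through unchanged, avoiding the
    # saturation that makes sigmoid gradients vanish in deep stacks
    return np.maximum(0.0, x)

def dropout(x, rate, training=True):
    # randomly zero a fraction `rate` of nodes during training and
    # rescale the survivors so the expected activation stays the same
    if not training:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

W = he_init(784, 256)                       # instead of naive random init
h = relu(np.zeros((1, 784)) @ W)            # ReLU instead of sigmoid
h = dropout(h, 0.5)                         # fewer active nodes per step
```

At inference time `dropout(..., training=False)` is an identity, which is why frameworks only apply it during training.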
4/11 Mon Monday! Today we learn about deep learning and neural networks! A Perceptron is a single Neuron Deep Learning: built on a neural network structure in which nodes, each expressing one Logistic Regression, are connected to one another; it consists of one input layer, one or more hidden layers (more can help learning; 1–3 is usually adequate), and one output layer feed forward (propagation). The XOR problem cannot be learned with a single node (perceptron) 1. Perceptron. Implementing GATE operations (the AND, OR, XOR operators) with Logistic Regression and Tensorflow 1.15 Ver. import numpy as np import tensorflow as tf from sklear.. Read more
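Why one node fails on XOR can be shown without training at all: a single logistic node draws one line in the plane, and no line separates XOR's targets [0, 1, 1, 0]. Two hidden logistic nodes combined by an output node do the job. The weights below are hand-picked for illustration, not learned:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# hidden node 0 behaves like OR, hidden node 1 like AND
W1 = np.array([[20.0, 20.0],
               [20.0, 20.0]])
b1 = np.array([-10.0, -30.0])

# output node computes roughly OR AND (NOT AND) == XOR
W2 = np.array([[20.0], [-20.0]])
b2 = np.array([-10.0])

hidden = sigmoid(X @ W1 + b1)           # feed forward, layer 1
out = sigmoid(hidden @ W2 + b2)         # feed forward, layer 2
print((out > 0.5).astype(int).ravel())  # [0 1 1 0]
```

One perceptron per GATE works for AND and OR, but XOR needs the extra hidden layer — exactly the limitation that motivated multi-layer networks.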
4/8 Fri Friday! 🐱‍🏍 Today we wrap up Regression~~ Monday 4/11 is the machine learning written test, and Sunday 4/17 four performance-assessment submissions are due. Missing values are handled by deletion, or by imputation (interpolation, substitution) - averaging techniques (replace the independent variable with a representative value) or machine learning techniques (the dependent variable is the target; KNN) KNN (K-Nearest Neighbors): its hyperparameters are k (even k=1 guarantees a certain level of performance) and the distance metric (usually Euclidean) Normalization is mandatory. Since distances to every data point must be computed, it can be slow 1. Logistic Regression + KNN - BMI data import numpy as np import pandas as pd fro.. Read more
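KNN-based imputation plus the mandatory normalization can be sketched with scikit-learn's `KNNImputer`; the toy data and `n_neighbors=2` are assumptions for the example, not the post's BMI setup:

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0,   2.0],
              [2.0,   np.nan],   # missing value to fill
              [3.0,   6.0],
              [100.0, 8.0]])

# KNN works on distances, so normalize first: otherwise the
# large-scale first column dominates the neighbor search
X_scaled = MinMaxScaler().fit_transform(X)

# fill the gap with the mean of the 2 nearest rows (Euclidean distance)
imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X_scaled)

print(X_filled[1, 1])  # mean of its two neighbors' values in that column
```

The cost the post mentions shows up here too: every imputation computes distances from the incomplete row to all complete rows, which gets slow on large tables.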
