DeepLearning
From Hawk Wiki
==Deep Learning==
===Numpy/Python tricks===
A trick when you want to flatten a matrix X of shape (a, b, c, d) to a matrix X_flatten of shape (b * c * d, a) is to use:
<pre class="brush:python">
X_flatten = X.reshape(X.shape[0], -1).T  # X.T is the transpose of X
</pre>
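For example, with a hypothetical array of shape (a, b, c, d) = (4, 2, 3, 3) (random data, just to illustrate the shapes):

<pre class="brush:python">
import numpy as np

X = np.random.rand(4, 2, 3, 3)           # shape (a, b, c, d) = (4, 2, 3, 3)
X_flatten = X.reshape(X.shape[0], -1).T  # shape (b*c*d, a) = (18, 4)
print(X_flatten.shape)                   # (18, 4)
</pre>

Each column of X_flatten is one flattened example: column i equals X[i].ravel().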
+ | |||
+ | ===Common Steps For Data Pre Processing=== | ||
+ | Common steps for pre-processing a new dataset are: | ||
+ | |||
+ | *Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...) | ||
+ | *Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1) | ||
+ | *"Standardize" the data | ||
+ | |||
===Common functions===
<pre class="brush:python">
import numpy as np

def sigmoid(z):
    """Compute the sigmoid of z; z may be a scalar or a numpy array."""
    return 1 / (1 + np.exp(-z))
</pre>
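Because np.exp is vectorized, the same function works on scalars and arrays alike:

<pre class="brush:python">
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

print(sigmoid(0))                           # 0.5
print(sigmoid(np.array([-1.0, 0.0, 1.0])))  # approximately [0.269, 0.5, 0.731]
</pre>

Note the symmetry sigmoid(-z) = 1 - sigmoid(z), which is a handy sanity check.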
Latest revision as of 02:37, 8 January 2018