Programming Language and Software | Software Links |
---|---|
Data Science in Python | Python |
Data Science in R | R |
Data Science in Excel | Excel |
Data Science in Power BI | Power BI |
Data Science in Tableau | Tableau |
Data Science is an interdisciplinary field that employs scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data, both quantitative and qualitative. Its applications span a wide range of domains, turning that knowledge into actionable insights.
This practice involves honing programming skills and gaining proficiency in mathematics and statistics, with the aim of deriving meaningful insights from structured and unstructured data, whether Kaggle datasets or real-world data. Learning proceeds step by step, covering analytical techniques, statistics, and research methods.
The most commonly used methods in data science include regression, clustering, visualization, decision trees/rules, and random forests. One must also learn the data analysis process using tools such as Python, R, Excel, Power BI, and Tableau. Aspiring data scientists should then expand into machine learning and deep learning to build a comprehensive understanding of data and its analysis.
Completed Staff Work, similar to data analysis, empowers decision makers to identify solutions to problems or address issues through the careful consideration of reasonable and workable alternatives.
6. Select or identify the solution you want to recommend based on the results of your objective analysis.
7. Develop a plan to implement the solution and the documents necessary to authorize the implementation.
- Define Problem
- Data Collection
- Data Understanding
- Data Analysis/Cleaning
- Data Organization/Transformation
- Data Validation/Anomaly Detection
- Feature Engineering
- Model Training
- Model Evaluation/Validation
- Model Deployment
- Model Monitoring
- Data Drift/Model Drift
- Reports
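The workflow above can be sketched end to end with scikit-learn, one of the libraries this document covers. The dataset and model choices here are illustrative, not prescriptive:

```python
# A minimal sketch of the workflow: collect data, transform it, train a
# model, and evaluate on held-out data (illustrative choices throughout).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                    # Data Collection
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)            # hold out data for validation

model = Pipeline([
    ("scale", StandardScaler()),                     # Data Organization/Transformation
    ("clf", LogisticRegression(max_iter=1000)),      # Model Training
])
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))  # Model Evaluation
print(f"test accuracy: {acc:.2f}")
```

The real steps (data cleaning, validation, monitoring, drift detection) each deserve their own tooling; this only shows the shape of the loop.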
- Data Engineer
- Develops, constructs, tests, and maintains architectures such as databases and large-scale processing systems.
- Data Analyst
- Interprets data and turns it into information that can offer ways to improve a business.
- Gathers information from various sources and interprets patterns and trends.
- Machine Learning Scientist
- Researches and develops algorithms.
- Makes predictions from data with labels and features.
- Creates predictive models.
- Descriptive Analysis
- Text Analysis
- Statistical Analysis
- Diagnostic Analysis
- Predictive Analysis
- Prescriptive Analysis
- Supervised Data (Data pre-categorized or numerical)
- Classification (Predict a category)
- Regression (Predict a number)
- Unsupervised Data (Data is not labeled in any way)
- Clustering (Divide by similarity)
- Dimension Reduction (Generalization) - Find hidden dependencies
- Association (Identify Sequences)
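The supervised/unsupervised split above can be shown in a few lines of scikit-learn. This is a sketch on a built-in toy dataset; the classifier and cluster count are arbitrary choices:

```python
# Supervised learning uses labels; unsupervised learning finds structure
# without them (illustrative sketch, not a recipe).
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: classification - predict a category from labeled data
clf = KNeighborsClassifier().fit(X, y)
pred_label = clf.predict(X[:1])

# Unsupervised: clustering - divide samples by similarity; labels y unused
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
cluster_ids = set(km.labels_)
print(pred_label, cluster_ids)
```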
- Import, read, clean, and validate
- Define Variables
- Y is "Dependent Variable" and goes on y-axis (the left side, vertical one) - output value
- X is "Independent Variable" and goes on the x-axis (the bottom, horizontal one) - input value
- Type of Data
- Quantitative
- Ratio or Interval
- Discrete and Continuous
Discrete variables can only take certain numerical values and are counted.
Continuous variables can take any numerical value and are measured.
- Qualitative
- Nominal or Ordinal
- Binary, nominal data, and ordinal data
Categorical variables take category or label values and place an individual into one of several groups.
- Types of data measurement
- Nominal - names or labels.
For example, gender: male and female. Other examples include eye colour and hair colour.
- Ordinal - non-numeric ordered concepts like satisfaction, happiness, discomfort, etc.
For example, rating happiness on a scale of 1-10.
- Interval - numeric scales in which we know both the order and the exact differences between the values.
For example, temperature: the difference between 10-20 degrees is the same as the difference between 20-30 degrees. Likert-scale data is commonly analyzed as interval data. A Likert scale is composed of a series of four or more Likert-type items (similar questions) combined into a single composite score/variable; treated as interval data, the mean is the best measure of central tendency, and means and standard deviations describe the scale. It is the rating scale often found on survey forms, measuring how people feel about something, with ideally 5-7 balanced response options and often a neutral midpoint.
- Ratio - interval scales with a true zero.
For example, ratio data cannot have negative values; measurements of height, whether in centimetres, metres, inches, or feet, are ratio data.
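The measurement levels above map naturally onto pandas dtypes. A small sketch with made-up column names; nominal and ordinal columns become categoricals, interval and ratio columns stay numeric:

```python
# Nominal vs ordinal vs numeric columns in pandas (illustrative data).
import pandas as pd

df = pd.DataFrame({
    "eye_colour": ["brown", "blue", "green"],  # nominal: labels, no order
    "happiness": [3, 1, 2],                    # ordinal: ordered ranks
    "temp_c": [10.0, 20.0, 30.0],              # interval: differences meaningful
    "height_cm": [170.0, 155.0, 181.0],        # ratio: true zero exists
})
df["eye_colour"] = pd.Categorical(df["eye_colour"])  # unordered categories
df["happiness"] = pd.Categorical(df["happiness"], categories=[1, 2, 3], ordered=True)
print(df.dtypes)
```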
- Visualize distributions
- Univariate visualization
- Bivariate visualization
- Multivariate visualization
- Dimensionality reduction
- Explore relations between variables
- Descriptive statistics
- Inferential statistics
- Statistical graphics
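The "descriptive statistics" bullet above can be made concrete with Python's standard library alone; no third-party packages needed (toy data for illustration):

```python
# Mode, median, mean, variance, and standard deviation from the stdlib.
import statistics as st

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(st.mode(data), st.median(data), st.mean(data))  # 4, 4.5, 5
print(st.pvariance(data), st.pstdev(data))            # 4, 2.0
```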
- Explore multivariate relationships
- Statistical Analysis
- Cases, Variables, Types of Variables
- Matrix and Frequency Table
- Graphs and Shapes of Distributions
- Mode, Median and Mean
- Range, Interquartile Range and Box Plot
- Variance and Standard deviation
- Z-scores
- Contingency Table, Scatterplot, Pearson's correlation
- Basics of Regression
- Elementary Probability
- Random Variables and Probability Distributions
- Normal Distribution, Binomial Distribution & Poisson Distribution
- Hypothesis
3 Steps:
(1) Making an initial assumption.
(2) Collecting evidence (data).
(3) Based on the available evidence (data), deciding whether to reject or not reject the initial assumption.
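The three steps above, sketched as a hand-rolled one-sample z-test using only the standard library. The data and the assumed mean of 50 are made up for illustration:

```python
# Hypothesis testing in three steps (toy numbers, normal approximation assumed).
import math
import statistics as st

# (1) Initial assumption: the population mean is 50.
mu0 = 50
# (2) Collect evidence (data).
sample = [52, 55, 48, 51, 53, 54, 50, 56, 49, 52]
# (3) Decide: compute a test statistic and compare to a critical value.
n = len(sample)
z = (st.mean(sample) - mu0) / (st.stdev(sample) / math.sqrt(n))
reject = abs(z) > 1.96  # ~5% significance level, two-sided
print(f"z = {z:.2f}, reject H0: {reject}")
```

In practice a library routine (e.g. a t-test) would be used instead of the 1.96 normal cutoff, but the three-step logic is the same.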
- Inferential Statistics
- Observational Studies and Experiments
- Sample and Population
- Population Distribution, Sample Distribution and Sampling Distribution
- Central Limit Theorem
- Point Estimates
- Confidence Intervals
- Introduction to Hypothesis Testing
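The "Point Estimates" and "Confidence Intervals" bullets above in code: a 95% confidence interval for a mean, using the normal approximation and toy data:

```python
# Point estimate plus a 95% confidence interval (stdlib only, toy data).
import math
import statistics as st

sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 12.0]
n = len(sample)
mean = st.mean(sample)                # point estimate of the population mean
se = st.stdev(sample) / math.sqrt(n)  # standard error of the mean
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
```

For small samples like this one, a t critical value would be more appropriate than 1.96; the structure of the calculation does not change.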
- Questions about data
- Do you have the right data for exploratory data analysis?
- Do you need other data?
- Do you have the right question?
- Choose Programming Language
- Python or R
- Mathematics and Linear Algebra
- Big Data
- Data Visualization
- Data Cleaning
- How to solve Problem?
- Machine Learning
- Types of algorithms that perform the learning
- Supervised Learning
- Dataset has labels
- Classification
- Binary Classification
- Multiclass Classification
- Multilabel Classification
- Regression
- Linear Regression: Linear relationships between inputs and outputs
- Logistic Regression: Probability of a binary output
- Unsupervised Learning
- Dataset is unlabeled
- Semi-supervised Learning
- Dataset contains both labeled and unlabeled data
- Reinforcement Learning
- Learns from mistakes
- An agent takes "actions" in an environment and observes the "state" of the environment through its features
- Executing actions in each state brings different "rewards"
- It learns a "policy"
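The linear and logistic regression bullets above, side by side on tiny synthetic data (a sketch; the thresholds and values are arbitrary):

```python
# Linear regression predicts a number; logistic regression predicts the
# probability of a binary output (toy, perfectly clean data).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.arange(10).reshape(-1, 1)

# Linear: fit a numeric output (here y = 2x + 1 exactly)
lin = LinearRegression().fit(X, 2 * X.ravel() + 1)
print(lin.coef_[0], lin.intercept_)  # recovers slope ~2 and intercept ~1

# Logistic: binary output (1 when x >= 5)
logit = LogisticRegression().fit(X, (X.ravel() >= 5).astype(int))
print(logit.predict([[0], [9]]))
```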
- Common Machine Learning Algorithms
- Linear Regression
- Logistic Regression
- Decision Tree
- SVM
- Naive Bayes
- kNN
- K-Means
- Random Forest
- Dimensionality Reduction Algorithms
- Gradient Boosting algorithms
- Deep Learning
- Common Library
- TensorFlow
- Keras
- Theano
- Pytorch
- sklearn
- Caffe
- Apache Spark
- Chainer
- Overfitting - the gap between training and test error is large.
- Overfitting - the training error is smaller than the test error.
- Overfitting - the larger the hypothesis space, the higher the tendency for the model to overfit the training dataset.
- A model suffering from overfitting will have high variance and low bias.
- Simplify the model (fewer parameters)
- Simplify training data (fewer attributes)
- Constrain the model (regularization)
- Use cross-validation
- Use early stopping
- Build an ensemble
- Gather more data
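The train/test gap described above can be observed directly. A 1-nearest-neighbour classifier memorizes the training set (training accuracy 1.0) but scores lower on held-out data; dataset and model here are just convenient examples:

```python
# Overfitting made visible: perfect training score, lower test score.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)  # memorizes training data
print(knn.score(X_tr, y_tr), knn.score(X_te, y_te))        # gap = overfitting
```

Increasing `n_neighbors` constrains the model (one of the remedies listed above) and narrows the gap.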
- Underfitting - both the training and test errors are large.
- A model suffering from underfitting will have high bias and low variance.
- Increase model complexity (more parameters)
- Increase number of features
- Feature engineering should help
- Un-constrain the model (no regularization)
- Reduce or remove noise on the data
- Train for longer
- Improve the "Accuracy" of Machine Learning Model
- Add More Data
- Add More Features
- Feature Engineering
- Feature Selection
- Use Regularization
- Multiple Algorithms
- Ensemble Methods
- Cross Validation
- Algorithm Tuning
- Bagging or Boosting
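Cross validation, from the list above, scores a model on several train/test splits instead of one, giving a more reliable accuracy estimate. A quick scikit-learn sketch (model and dataset are illustrative):

```python
# 5-fold cross-validation: five scores instead of one lucky (or unlucky) split.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
print(scores.mean(), scores.std())
```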