I am a graduate student in data science with over six years of experience in the software industry, expecting to complete my formal education in summer 2024. I hold a Master’s degree in Computer Science from The University of Iowa and have built a strong foundation in data analysis, modeling, and automation throughout my career.
My expertise in data processing tools such as Python, R, SQL, and Jupyter has equipped me to develop and maintain robust data pipelines that ensure the quality and reliability of my analyses. I have also applied machine learning algorithms, both supervised and unsupervised, to solve complex problems and generate actionable insights.
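To make that concrete, here is a minimal, self-contained sketch of the kind of supervised-learning step I mean; the dataset is scikit-learn's bundled iris sample, used purely for illustration rather than taken from a real project:

```python
# Minimal supervised-learning sketch (illustrative only):
# load data, split, fit a model, and report held-out accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```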
I am familiar with agile software development methodologies and have worked closely with cross-functional teams to ensure our data-driven initiatives align with project timelines and goals.
Furthermore, my proficiency in SQL and database technologies allows me to validate and analyze data effectively. I am skilled in data extraction, web scraping, wrangling, and acquisition, which lets me collect, transform, and analyze data efficiently.
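As an illustration of the wrangling side, here is a small sketch that cleans an extracted table; the inline CSV stands in for whatever a scraper or extract job would return, and the column names are made up:

```python
# Minimal data-wrangling sketch (inline CSV stands in for a scraped source):
# parse raw text, coerce types, drop incomplete rows, derive a new column.
import io
import pandas as pd

raw = io.StringIO(
    "order_id,amount,date\n"
    "1001,250.00,2023-01-15\n"
    "1002,,2023-01-16\n"
    "1003,99.50,not-a-date\n"
)

df = pd.read_csv(raw)
df["date"] = pd.to_datetime(df["date"], errors="coerce")  # invalid dates -> NaT
df = df.dropna(subset=["amount", "date"])                 # drop incomplete rows
df["amount_cents"] = (df["amount"] * 100).astype(int)     # derived column

print(df)
```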
As a dedicated learner, I continuously seek opportunities to expand my knowledge and skills: I attend training sessions, workshops, and industry conferences to stay current with the latest advancements in data science and machine learning, and I actively engage in online communities and events to connect with and learn from other professionals in the field.
- Programming and Scripting Languages: JavaScript, Python, R, SQL, Bash, Java, HTML, CSS
- Automation Testing: Selenium, SeleniumBase, Cypress, JMeter, and Playwright
- Data Formats: JSON, CSV, Excel, plain text, XML, SQL, Parquet, Avro, ORC
- Version Control and Container Virtualization: Docker
- Databases: Postgres, SQL Server, MySQL, SQLite, and MongoDB
- ETL and Database Model Development: implement new procedures and build data warehouses
- Data Warehouses, Data Lakes, Data Pipelines, Automation
- Gather Requirements from Business Analysts
- Develop Physical Data Models Using Erwin
- Create DDL Scripts to Design Database Schemas and Database Objects (see the sketch after this list)
- Cloud Computing: AWS; Microsoft Azure (certified AZ-900, DP-900, AI-900)
- Perform Database Defragmentation and Optimize SQL Queries
- Improve Database Performance and Loading Speed
- Frameworks: PySpark and Hadoop
- Data Visualization: Tableau
- Operating Systems: Windows, macOS, Linux (Ubuntu, Red Hat)
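And the DDL sketch referenced in the list above: a toy schema created through Python's sqlite3 module, with illustrative table and index names rather than ones from a real project:

```python
# Toy DDL sketch: create schema objects in an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        created_at  TEXT DEFAULT CURRENT_TIMESTAMP
    );
    CREATE INDEX idx_customers_name ON customers (name);
""")

# Verify the objects exist by querying SQLite's catalog.
for row in conn.execute("SELECT type, name FROM sqlite_master"):
    print(row)
conn.close()
```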
🔹Fun fact 👉 01000011 01101111 01100100 01101001 01101110 01100111 00100000