
# Hi there πŸ‘‹, I'm Mek


I'm a passionate junior data engineer who uses data to drive impactful insights and solutions. I enjoy solving complex data challenges and working with diverse technologies to optimize data pipelines and workflows. With a strong foundation in data engineering tools and methodologies, I'm particularly interested in advances in data science, cloud computing, and big data analytics.

My goal is to contribute to innovative projects that harness the power of data to create meaningful outcomes across industries.

## πŸ”— Connect with me

LinkedIn · Gmail · Medium

## πŸ’» Languages and Tools

Python · Apache Airflow · Apache Kafka · Pandas · Apache Spark · Docker · AWS · Azure · Google Cloud · Postgres · Apache Cassandra · MongoDB · macOS · Visual Studio Code · Notion

πŸ“ Recent Medium Articles

Recent Article 0


## πŸš€ Featured Projects

### LiquorSales_Data_Migration_Pipeline

- ℹ️ **Description:** Big data migration from GCP to Azure (a minimal sketch of the copy step follows this project).
- πŸ† **Achievements:** Migrated over 19 million rows from Google Cloud Storage to Azure Data Lake with zero data loss, verifying data integrity along the way.
- 🎯 **Technologies used:**
  - Processing: PySpark
  - GCP services: Google Cloud Storage (GCS), BigQuery
  - Azure services: Azure Data Factory, Azure Data Lake Storage Gen2, Databricks, Key Vault
  - Others: Docker
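
A minimal PySpark sketch of the copy-and-verify step, assuming Parquet files and a cluster with the GCS and ABFS connectors already configured; all bucket, container, and account names below are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("gcs-to-adls-migration").getOrCreate()

# Read the source data exported to GCS (assumed Parquet; requires the GCS connector).
source_df = spark.read.parquet("gs://example-source-bucket/liquor_sales/")

# Capture the row count before writing, as a basic integrity baseline.
expected_rows = source_df.count()

# Write to ADLS Gen2 via the ABFS driver; in the real pipeline the storage
# credentials would come from Azure Key Vault, not hard-coded cluster config.
target_path = "abfss://raw@exampleaccount.dfs.core.windows.net/liquor_sales/"
source_df.write.mode("overwrite").parquet(target_path)

# Re-read and compare counts to confirm zero data loss.
migrated_rows = spark.read.parquet(target_path).count()
assert migrated_rows == expected_rows, "row count mismatch after migration"
```

Row counts are the cheapest integrity check; per-partition counts or checksums can tighten the guarantee.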
### Medallion_DataLakehouse

- ℹ️ **Description:** Building a Data Lakehouse using the Medallion architecture (a bronze-to-silver sketch follows this project).
- πŸ† **Achievements:** Developed a scalable Data Lakehouse on the Medallion (bronze/silver/gold) layout, enabling efficient data storage, processing, and analysis with seamless integration across Azure services.
- 🎯 **Technologies used:**
  - Processing: DBT (Data Build Tool)
  - Azure services: Azure SQL Database, Azure Data Lake Storage Gen2, Databricks, Azure Key Vault
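
The project expresses its transformations as DBT models; purely as an illustration, here is a minimal PySpark-style sketch of what a bronze-to-silver step does under the Medallion layout. Paths and column names are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-bronze-to-silver").getOrCreate()

# Bronze layer: raw records landed as-is from the source systems.
bronze = spark.read.parquet(
    "abfss://bronze@exampleaccount.dfs.core.windows.net/orders/"
)

# Silver layer: typed, deduplicated, and filtered records ready for modeling.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("order_id").isNotNull())
)

silver.write.mode("overwrite").parquet(
    "abfss://silver@exampleaccount.dfs.core.windows.net/orders/"
)
```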
### Realtime_Data_Streaming

- ℹ️ **Description:** Real-time data ingestion to Cassandra using Airflow, Kafka, and Spark (a streaming sketch follows this project).
- πŸ† **Achievements:** Engineered a robust real-time streaming pipeline, enabling low-latency ingestion into Cassandra and consistent data flow and processing across platforms.
- 🎯 **Technologies used:**
  - Processing: PySpark
  - Orchestration: Airflow
  - Messaging: Kafka, Zookeeper
  - Monitoring: Confluent
  - Storage: Cassandra
  - Others: Docker
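
A minimal Spark Structured Streaming sketch of the Kafka-to-Cassandra path; the topic, keyspace, table, and event schema are hypothetical, and the job assumes the spark-sql-kafka and spark-cassandra-connector packages are on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType

# Hypothetical event schema for messages on the Kafka topic.
schema = StructType([
    StructField("id", StringType()),
    StructField("name", StringType()),
    StructField("email", StringType()),
])

spark = SparkSession.builder.appName("kafka-to-cassandra").getOrCreate()

# Consume JSON events from Kafka and parse them against the schema.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "users_created")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("data"))
    .select("data.*")
)

# Write each micro-batch to Cassandra for low-latency availability.
def write_to_cassandra(batch_df, batch_id):
    (batch_df.write
        .format("org.apache.spark.sql.cassandra")
        .options(keyspace="spark_streams", table="created_users")
        .mode("append")
        .save())

query = (
    events.writeStream
    .foreachBatch(write_to_cassandra)
    .option("checkpointLocation", "/tmp/checkpoints/users_created")
    .start()
)
query.awaitTermination()
```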

## Pinned Repositories

1. **LiquorSales_Data_Migration_Pipeline** (Python): Big data migration across cloud platforms.

2. **Medallion_DataLakehouse** (Jupyter Notebook): Building a Data Lakehouse using the Medallion architecture.

3. **PostgresToMongoDB_migration_project** (Python): Migrate data from a Postgres database to MongoDB (a minimal sketch follows this list).

4. **Realtime_Data_Streaming** (Python): Real-time data ingestion to Cassandra using Airflow, Kafka, and Spark.
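
For the Postgres-to-MongoDB migration (item 3), a minimal sketch of the row-copy loop, assuming the psycopg2 and pymongo drivers; connection strings, table, and collection names are hypothetical placeholders.

```python
import psycopg2
from pymongo import MongoClient

# Hypothetical connection strings; real ones belong in environment variables.
pg_conn = psycopg2.connect("postgresql://user:password@localhost:5432/source_db")
mongo = MongoClient("mongodb://localhost:27017")
collection = mongo["target_db"]["customers"]

with pg_conn.cursor() as cur:
    cur.execute("SELECT id, name, email FROM customers")
    columns = [desc[0] for desc in cur.description]
    batch = []
    for row in cur:
        batch.append(dict(zip(columns, row)))
        if len(batch) >= 1000:  # insert in batches to bound memory use
            collection.insert_many(batch)
            batch.clear()
    if batch:
        collection.insert_many(batch)

pg_conn.close()
mongo.close()
```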