
Commit

Merge branch 'Mind-Benders:blogs' into blogs
Saurabha999 authored Mar 10, 2024
2 parents 4d9e6e7 + 5be8101 commit f8f0b28
Showing 39 changed files with 705 additions and 82 deletions.
3 changes: 3 additions & 0 deletions .env
@@ -0,0 +1,3 @@
API_KEY=066a2ed9cfc332fb09112a9059ccdbf7
APPLICATION_ID=QFLBKAUEYJ
index=tcetopensource
2 changes: 2 additions & 0 deletions .github/FUNDING.yml
@@ -0,0 +1,2 @@
ko_fi: tcetopensource
custom: [buymeacoffee.com/tcetopensource]
9 changes: 3 additions & 6 deletions .github/workflows/build.yml
@@ -4,15 +4,12 @@ on:
pull_request:
branches:
- main
- staging
- blogs
workflow_dispatch:

permissions:
contents: read

env:
API_KEY: ${{secrets.API_KEY}}
APPLICATION_ID: ${{secrets.APPLICATION_ID}}
index: ${{secrets.index}}

jobs:
build:
@@ -21,7 +18,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Set up Node
uses: actions/setup-node@v3
with:
29 changes: 0 additions & 29 deletions .github/workflows/production.yml

This file was deleted.

1 change: 0 additions & 1 deletion .gitignore
@@ -14,7 +14,6 @@
.env.development.local
.env.test.local
.env.production.local
.env

npm-debug.log*
yarn-debug.log*
28 changes: 27 additions & 1 deletion README.md
@@ -6,6 +6,13 @@ One of the key visions of the TCET Open Source Organization is to provide users

The **Documentation Website** will act as our Organization's website. It will consist of the *documentation of all projects* undertaken by TCET Open Source. The website will also have dedicated *blogs contributed by the community* covering various different interests.

## Hacktoberfest
### Powered by:
<img width="311" alt="mlh-logo-color" src="https://github.com/tcet-opensource/hacktober-fest/assets/55846983/d5ccae96-86a7-4fed-8f00-e9f1d0aa5cac">

### How to contribute
Read our [workflow](https://opensource.tcetmumbai.in/docs/resources/workflows/external-workflow/) guide and have a look at issues marked with the <code>Hacktoberfest</code> tag. Don't forget to read the rest of the README. If you have any doubts, contact the project maintainers on our Discord server.

## Cloning

To clone the repository, copy and paste the following command into your system's terminal.
@@ -30,6 +37,25 @@ Run the project live on your local system to make changes and check the updates.
npm run start
```

## .env File Setup

In addition to cloning the repository and setting up the project as described above, you will need to create a `.env` file to configure certain environment variables. These variables are essential for the proper functioning of the Documentation Website. Here's how to set it up:

+ **Create the `.env` file**: Create a file named `.env` in the root directory of your cloned repository.

+ **Add the API key and application ID**: Inside the `.env` file, set two variables: `API_KEY` and `APPLICATION_ID`. They are used to connect to the external services the website relies on.

```
API_KEY=<your_api_key>
APPLICATION_ID=<your_id>
index=tcetopensource
```

+ `API_KEY` should be replaced with the actual API key required for your project. Obtain it from the service or provider you're using.

+ `APPLICATION_ID` should be replaced with the unique application ID required for your project. Again, obtain this from the relevant service or provider.


<div align="center">
<h3> Connect with us<a href="https://gifyu.com/image/Zy2f"><img src="https://github.com/milaan9/milaan9/blob/main/Handshake.gif" width="50px"></a>
</h3>
@@ -39,4 +65,4 @@ npm run start
<a href="https://discord.gg/r7ZhAREg2M" target="_blank"><img alt="Discord" width="40px" src="https://cdn-icons-png.flaticon.com/512/5968/5968756.png"></a> &nbsp&nbsp&nbsp
<a href="mailto:opensource@tcetmumbai.in" target="_blank"><img alt="Gmail" width="40px" src="https://cdn-icons-png.flaticon.com/512/5968/5968534.png"></a> &nbsp&nbsp&nbsp
<a href="https://www.linkedin.com/company/tcet-opensource/" target="_blank"><img alt="LinkedIn" width="40px" src="https://cdn-icons-png.flaticon.com/512/3536/3536505.png"></a> &nbsp&nbsp&nbsp
</p>
</p>
Binary file added blog/2023-10-06-linux-cli/history.png
155 changes: 155 additions & 0 deletions blog/2023-10-06-linux-cli/index.mdx

Large diffs are not rendered by default.

Binary file added blog/2023-10-06-linux-cli/kernel.png
Binary file added blog/2023-10-08-web-crawling/benefits.png
139 changes: 139 additions & 0 deletions blog/2023-10-08-web-crawling/index.mdx
@@ -0,0 +1,139 @@
---
slug: web-crawling
title: Web Crawling, A Beginner’s Perspective on Data Extraction
authors: [mahima]
tags: [web crawling, data extraction, automation]
description: In this blog, you will explore the fundamentals of web crawling and how you can get started with your own data extraction projects.
keywords: [web crawling, BeautifulSoup, scrapy, data extraction]
---

> _Web crawling, also known as web scraping, is the process of automatically extracting data from websites.
It allows us to gather valuable information from various sources on the internet efficiently and in a structured manner.
In this blog, we’ll explore the fundamentals of web crawling and how you can get started with your own data extraction projects._

<br />

<!--truncate-->

import webcrawl from "./web_crawl.png"
import benefits from "./benefits.png"
import process from "./process.png"

<figure>
<img src={webcrawl} style={{border: "2px solid grey"}}/>
</figure>

## What if I told you that web crawling could come to your rescue even in unexpected work scenarios? 🤔

Imagine you’re on a relaxing weekend, enjoying your favorite Netflix series, when suddenly your boss calls with an urgent task.

Let’s say your boss needs a comprehensive analysis of competitors’ pricing for an upcoming project. Manually collecting this data from various websites would be time-consuming and error-prone.
However, with web crawling, you can **automate the data extraction process**, quickly gathering pricing information from multiple sources and generating a detailed report.
Not only does this save you hours of manual work, but it also **ensures accuracy** and **provides valuable insights** for your boss.

Web crawling can be a game-changer in various work scenarios. Need to gather customer reviews for a product launch?
Web crawling can swiftly scrape reviews from e-commerce platforms, allowing you to analyze sentiment and make data-driven decisions.
Want to monitor industry trends or track news updates? Web crawling can continuously fetch relevant information from news websites, keeping you up to date and enabling you to stay ahead of the competition.

## Benefits

Let's have a look at the benefits of web crawling that have made it such a popular technique for data collection in large-scale enterprises.

<figure>
<img src={benefits} style={{border: "2px solid black"}}/>
<center><figcaption>Benefits of Web Crawling</figcaption></center>
</figure>

## Process of Web Crawling

<figure>
<img src={process} style={{border: "2px solid black"}}/>
<center><figcaption>Process of Web Crawling</figcaption></center>
</figure>

<details>
<summary><b>💡 Discovery</b></summary>
<div>
In the discovery stage, a web crawler starts by identifying a set of seed URLs. These seed URLs are the starting points from which the crawler begins exploring the web.
They can be provided manually or generated programmatically. The crawler then extracts the links present on the seed pages and adds them to a queue for further processing.
</div>
</details>

<details>
<summary><b>🕷️ Crawling</b></summary>
<div>
The crawling stage involves visiting the URLs in the queue and retrieving the corresponding web pages.
The crawler sends HTTP requests to the web servers hosting the pages and receives HTTP responses in return.
The responses typically include HTML content, but they can also include other types of files such as images, CSS files, or JavaScript files. The crawler parses the HTML content to extract links and other relevant information for subsequent crawling.
</div>
</details>

<details>
<summary><b>⛏️ Fetching</b></summary>
<div>
During the fetching stage, the crawler retrieves the content of the web pages by downloading them from the web servers.
This process involves downloading the HTML and any associated files, such as images or scripts, required to render the page correctly.
The fetched content is then stored for further processing.
</div>
</details>

<details>
<summary><b>💻 Rendering</b></summary>
<div>
Rendering refers to the process of processing and executing JavaScript code present on web pages. Some web pages heavily rely on JavaScript to load and display content dynamically.
Modern web crawlers often include a rendering engine that can execute JavaScript code, allowing the crawler to handle pages that rely on client-side rendering.
</div>
</details>

<details>
<summary><b>📑 Indexing</b></summary>
<div>
Once the web pages are fetched and rendered, the crawler can extract the desired data from the pages. This data can include text content, metadata, links, or any other relevant information.
The extracted data is typically processed and stored in an organized manner, such as in a database or an index, for further analysis or retrieval.
</div>
</details>


:::info
It’s important to note that web crawling is an iterative process.
As the crawler discovers new links during the crawling stage, it adds them to the queue for subsequent crawling, continuing the process of discovery, crawling, fetching, rendering, and indexing for a broader coverage of the web.
:::
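
The loop below is a minimal sketch of this iterative process in Python, using the `requests` and `beautifulsoup4` libraries mentioned later in this post; the seed URL and the crawl cap are placeholders for illustration, not part of any real project:

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

seeds = ["https://example.com/"]  # discovery: manually provided seed URLs
queue = deque(seeds)
visited = set()

while queue and len(visited) < 50:  # cap the crawl for this short demo
    url = queue.popleft()
    if url in visited:
        continue
    visited.add(url)
    response = requests.get(url, timeout=10)  # crawling + fetching
    soup = BeautifulSoup(response.text, "html.parser")
    for link in soup.find_all("a", href=True):  # discover new links
        queue.append(urljoin(url, link["href"]))  # enqueue for later visits
    # indexing: extract and store soup.get_text() or specific fields here
```

Each iteration discovers new links and feeds them back into the queue, which is exactly what keeps the crawl expanding.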

## Getting Started with Web Crawling

- **Identify Your Data Needs**: Determine the specific information you want to extract from websites.
It could be product details, contact information, news articles, or any other relevant data.

- **Choose a Web Crawling Tool**: There are various web crawling frameworks and libraries available, such as BeautifulSoup and Scrapy in Python.
Select a tool that aligns with your programming language and project requirements.

> _You can learn more about Python Scrapy [**here**](https://docs.scrapy.org/en/latest/intro/tutorial.html)_
- **Understand the Website Structure**: Familiarize yourself with the target website's structure. Identify the HTML elements that contain the data you need, noting their class names, IDs, or specific tags.
Some key steps to follow here include:
1. Inspect the web page

2. Explore the HTML Elements

3. Identify unique Identifiers

```html
Example: <div class="product-name">
```

- **Write the Crawling Code**: Use your chosen web crawling tool to write code that navigates the website, locates the desired data, and extracts it.
This involves sending HTTP requests, parsing HTML content, and selecting the relevant elements; a short sketch follows this list.

- **Handle Website-Specific Challenges**: Some websites may implement anti-crawling measures like CAPTCHA or rate limiting.
Implement strategies like rotating IP addresses or adding delays in your crawling code to handle such challenges.
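
As a concrete (and deliberately simplified) sketch of the last two steps, the snippet below pulls the `product-name` elements from the earlier example and adds a polite delay; the URL and class name are placeholders, not a real site:

```python
import time

import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com/products", timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

# select every element matching the identifier found during inspection
for div in soup.select("div.product-name"):
    print(div.get_text(strip=True))

time.sleep(2)  # polite delay before the next request, to respect rate limits
```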

## Ethical Considerations

While web crawling can be a powerful tool for data extraction, it’s important to respect website owners’ terms of service and adhere to ethical guidelines. Always ensure that your crawling activities are legal and ethical.
Be mindful of any website-specific crawling policies and consider reaching out to website owners for permission when necessary.

## Conclusion

Web crawling opens up a world of possibilities for data extraction and analysis. By automating the process of gathering data from websites, you can save time and collect valuable insights. Armed with the knowledge from this beginner’s guide, you’re ready to embark on your web crawling journey.
Remember to stay ethical, explore different tools, and continue learning as you dive deeper into the exciting world of web crawling.

Binary file added blog/2023-10-08-web-crawling/process.png
Binary file added blog/2023-10-08-web-crawling/web_crawl.png
Binary file added blog/2023-10-1-cloudgaming/image.png
Binary file added blog/2023-10-1-cloudgaming/image2.png
Binary file added blog/2023-10-1-cloudgaming/image3.png
88 changes: 88 additions & 0 deletions blog/2023-10-1-cloudgaming/index.md
@@ -0,0 +1,88 @@
---
slug: Cloud-gaming
title: Exploring the Future of Gaming- Cloud Gaming Unveiled
authors:
- name: Om Hinge
title: Cloud Enthusiast & Gamer
url: https://github.com/Aisu2635
image_url: https://github.com/Aisu2635.png
tags: [cloud, gaming, cyberpunk, nvidia]
---

# Exploring the Future of Gaming: Cloud Gaming Unveiled

Hey there, fellow gamers! Today, we're diving into the fascinating world of **cloud gaming**, a technology that's changing the way we play and enjoy our favorite titles. In this article, we'll break down the concept of cloud gaming, its evolution, and its promising future.

## Introduction to Cloud Gaming

Imagine playing high-quality video games without the *need* for expensive gaming hardware or the hassle of *downloading and installing massive game files*. That's the magic of cloud gaming! It's like Netflix for gamers, where you can instantly access and play games over the internet without worrying about hardware requirements.

Most modern games demand a hefty amount of storage space and powerful hardware to run smoothly. Think about titles like Call of Duty: Warzone, which takes up well over 100GB of storage. To play these games with the best experience, you'd need a high-end PC or gaming console. But what if there was a more affordable alternative?

![Cloud Gaming Demonstration by playing Cyberpunk 2077 on mobile](image.png)

## How Cloud Gaming Works

![Cloud Gaming flow](image2.png)

Cloud gaming operates within the realm of cloud computing. Instead of storing game files on your local device, they're hosted and processed on powerful remote servers in data centers. Here's how it works in a nutshell:

+ **Remote Servers**: Powerful servers host and run the games, eliminating the need for you to download and install them on your device.

+ **Streaming Gameplay**: Similar to streaming services like Netflix, cloud gaming sends a video stream of the gameplay over the internet to your device.

+ **Input Control**: Your inputs (the buttons you press and the moves you make) are sent to the server, where the game responds accordingly. This allows you to play even on low-end devices.

While cloud gaming offers incredible convenience, it's important to note that it can introduce some input lag, depending on factors like your internet connection stability and the distance between you and the server.

*Cloud gaming is one of the best examples of the power of cloud computing.*

## The History and Future of Cloud Gaming

In the past, cloud gaming faced numerous challenges, including network issues. Google's attempt with Google Stadia was ambitious but struggled due to connectivity problems. Other giants, like Amazon and Microsoft, entered the arena with Amazon Luna and Xbox Cloud Gaming.

However, the future of cloud gaming looks bright, especially in countries like India. Gaming is growing rapidly, and cloud gaming provides an affordable platform for those unable to invest in high-end gaming hardware. The potential to earn rewards through gaming is also on the rise, further boosting its popularity.

The primary challenge facing cloud gaming today is network-related issues, but providers are actively working on solutions to make it accessible to more users.

Cloud gaming can be seen as just the first step in bringing cloud computing services to non-technical users.
Google is developing and testing cloud-based quantum computing so that one day everyone can access the incredible power of quantum computers without needing a quantum rig, which might be bigger than most of our houses.

![cloud quantum computing](image3.png)

## Advantages and Drawbacks of Cloud Gaming

Cloud gaming offers several advantages, including:

- **Universal Platform**: You can play games on any device with an internet connection, from consoles to smartphones.

- **Cost-Efficiency**: No need for expensive hardware, as the processing is done on remote servers.

- **Portability**: Play on the go without worrying about installation and setup.

However, it's not without its drawbacks:

- **Internet Dependency**: A stable internet connection is crucial for a smooth experience.

- **Input Lag**: Some games may suffer from input delay due to server processing.

- **Limited Awareness**: Many people are still unaware of cloud gaming, and few providers exist compared to traditional gaming options.

## Current Status of Cloud Gaming

Several cloud gaming services are making waves in the industry:

- [x] **Nvidia GeForce Now**: This service is known for its low system requirements, compatibility with various devices, and a free trial period.
- [x] **Xbox Cloud Gaming**: Offers a large catalog of titles to play at no extra cost with a Game Pass subscription.
- [x] **JioGames Cloud**: Building cloud gaming culture and infrastructure in India at a reasonable price.

## Is Cloud Gaming Worth It?

While cloud gaming shows immense potential, it's still in its early stages. Input lag and connectivity issues can be frustrating, especially for competitive gamers. Traditional gaming setups remain a popular choice. However, as technology advances, cloud gaming has the potential to become the future of gaming in the coming decade.

In conclusion, cloud gaming is a game-changer with the potential to democratize gaming by making it accessible to more players. As it evolves and overcomes its current challenges, we can expect cloud gaming to reshape the gaming landscape in the near future.

## References:

- [Nvidia GeForce Now](https://www.nvidia.com/en-us/geforce-now/)
- [PlayStation Now](https://en.wikipedia.org/wiki/PlayStation_Now)
- ~~Google Stadia~~ (now shut down)
