docs: update titles of python templates (#293)
it should be (more) in accordance with the JS templates
vdusek authored Sep 12, 2024
1 parent f21edb4 commit 223908d
Showing 9 changed files with 11 additions and 11 deletions.
6 changes: 3 additions & 3 deletions templates/manifest.json
@@ -66,7 +66,7 @@
   {
     "id": "python-beautifulsoup",
     "name": "python-beautifulsoup",
-    "label": "BeautifulSoup + HTTPX",
+    "label": "BeautifulSoup",
     "category": "python",
     "technologies": [
       "beautifulsoup",
@@ -188,7 +188,7 @@
   {
     "id": "python-crawlee-beautifulsoup",
     "name": "python-crawlee-beautifulsoup",
-    "label": "Start with Python Crawlee and BeautifulSoup",
+    "label": "Crawlee + BeautifulSoup",
     "category": "python",
     "technologies": [
       "crawlee",
@@ -219,7 +219,7 @@
   {
     "id": "python-crawlee-playwright",
     "name": "python-crawlee-playwright",
-    "label": "Start with Python Crawlee and Playwright",
+    "label": "Crawlee + Playwright + Chrome",
     "category": "python",
     "technologies": [
       "crawlee",
2 changes: 1 addition & 1 deletion templates/python-beautifulsoup/README.md
@@ -1,4 +1,4 @@
-## BeautifulSoup and HTTPX template
+## Python BeautifulSoup template
 
 A template for [web scraping](https://apify.com/web-scraping) data from websites enqueued from starting URL using Python. The URL of the web page is passed in via input, which is defined by the [input schema](https://docs.apify.com/platform/actors/development/input-schema). The template uses the [HTTPX](https://www.python-httpx.org) to get the HTML of the page and the [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to parse the data from it. Enqueued URLs are available in [request queue](https://docs.apify.com/sdk/python/reference/class/RequestQueue). The data are then stored in a [dataset](https://docs.apify.com/platform/storage/dataset) where you can easily access them.
 
2 changes: 1 addition & 1 deletion templates/python-crawlee-beautifulsoup/README.md
@@ -1,4 +1,4 @@
-## Crawlee with BeautifulSoup
+## Python Crawlee with BeautifulSoup template
 
 A template for [web scraping](https://apify.com/web-scraping) data from websites starting from provided URLs using Python. The starting URLs are passed through the Actor's input schema, defined by the [input schema](https://docs.apify.com/platform/actors/development/input-schema). The template uses [Crawlee for Python](https://crawlee.dev/python) for efficient web crawling, handling each request through a user-defined handler that uses [Beautiful Soup](https://pypi.org/project/beautifulsoup4/) to extract data from the page. Enqueued URLs are managed in the [request queue](https://crawlee.dev/python/api/class/RequestQueue), and the extracted data is saved in a [dataset](https://crawlee.dev/python/api/class/Dataset) for easy access.
 
2 changes: 1 addition & 1 deletion templates/python-crawlee-playwright/README.md
@@ -1,4 +1,4 @@
-## Crawlee with Playwright
+## Python Crawlee with Playwright template
 
 A template for [web scraping](https://apify.com/web-scraping) data from websites starting from provided URLs using Python. The starting URLs are passed through the Actor's input schema, defined by the [input schema](https://docs.apify.com/platform/actors/development/input-schema). The template uses [Crawlee for Python](https://crawlee.dev/python) for efficient web crawling, making requests via headless browser managed by [Playwright](https://playwright.dev/python/), and handling each request through a user-defined handler that uses [Playwright](https://playwright.dev/python/) API to extract data from the page. Enqueued URLs are managed in the [request queue](https://crawlee.dev/python/api/class/RequestQueue), and the extracted data is saved in a [dataset](https://crawlee.dev/python/api/class/Dataset) for easy access.
 
2 changes: 1 addition & 1 deletion templates/python-empty/README.md
@@ -1,4 +1,4 @@
-## Empty Python template
+## Python empty template
 
 Start a new [web scraping](https://apify.com/web-scraping) project quickly and easily in Python with our empty project template. It provides a basic structure for the [Actor](https://apify.com/actors) with [Apify SDK](https://docs.apify.com/sdk/python/) and allows you to easily add your own functionality.
 
2 changes: 1 addition & 1 deletion templates/python-playwright/README.md
@@ -1,4 +1,4 @@
-## Playwright template
+## Python Playwright template
 
 ## Included features
 
2 changes: 1 addition & 1 deletion templates/python-scrapy/README.md
@@ -1,4 +1,4 @@
-## Scrapy template
+## Python Scrapy template
 
 A template example built with Scrapy to scrape page titles from URLs defined in the input parameter. It shows how to use Apify SDK for Python and Scrapy pipelines to save results.
 
2 changes: 1 addition & 1 deletion templates/python-selenium/README.md
@@ -1,4 +1,4 @@
-## Selenium & Chrome template
+## Python Selenium & Chrome template
 
 A template example built with Selenium and a headless Chrome browser to scrape a website and save the results to storage. The URL of the web page is passed in via input, which is defined by the [input schema](https://docs.apify.com/platform/actors/development/input-schema). The template uses the [Selenium WebDriver](https://www.selenium.dev/documentation/webdriver/) to load and process the page. Enqueued URLs are stored in the default [request queue](https://docs.apify.com/sdk/python/reference/class/RequestQueue). The data are then stored in the default [dataset](https://docs.apify.com/platform/storage/dataset) where you can easily access them.
 
2 changes: 1 addition & 1 deletion templates/python-standby/README.md
@@ -1,4 +1,4 @@
-## Empty Python template
+## Python standby template
 
 Start a new [web scraping](https://apify.com/web-scraping) project quickly and easily in Python with our Standby project template. It provides a basic structure for the [Actor](https://apify.com/actors) with [Apify SDK](https://docs.apify.com/sdk/python/) and allows you to easily add your own functionality.
 
