Support for tag filtering #287

Open
natamox opened this issue Jul 15, 2023 · 14 comments

natamox commented Jul 15, 2023

The whole thing is really too big, more than 70 GB. I only want to grab the javascript tag (or a few other tags). How can I do that? Thank you.

kelson42 (Contributor) commented:

This is not possible for now, but it should be feasible, though maybe not trivial. The main problem at the moment is that the scraper cannot scrape recent dumps, because Stack Exchange does not provide them anymore.

rgaudin changed the title from "How do I grab a part of the content on the stackoverflow website" to "Support for tag filtering" on Jul 17, 2023
rgaudin (Member) commented Jul 17, 2023

@natamox I renamed the ticket. What you'd want is to be able to supply a list of tags and get a ZIM with just the listed tags' content, right?
It would be a great feature indeed, but it's not implemented yet.

As @kelson42 said, sotoki's future is unclear ATM because Stack Exchange stopped providing XML dumps.

rgaudin removed the question label on Jul 17, 2023
natamox (Author) commented Jul 19, 2023

@rgaudin @kelson42 Well, thanks for your replies. Now I have another idea. I already have the 70 GB Stack Overflow ZIM file. Can I use a tool to unpack the ZIM file, delete some content according to my needs, and then repack it?

kelson42 (Contributor) commented:

@natamox We never do that because this would imply URL rewriting in all HTML pages to avoid dead links.

rgaudin (Member) commented Jul 19, 2023

Just to be clear: since my previous comment, we've seen that SE has published the dump and promised to keep doing so, so the future of sotoki is not at stake anymore.

As for extracting and repacking, it's possible, but since we only bundle the HTML version of questions, without metadata, you'd have to parse the HTML of every entry to find the tags…
Also, the tag-less navigation would need to be fixed, and the related links would not work either.

You're better off implementing this ticket: it's way less work, and the outcome is clear and solid 😉

kelson42 (Contributor) commented Jul 19, 2023

@rgaudin @benoit74 I think this is not only a valid feature request, but actually a pretty good one. We could for example do one ZIM per mainstream programming language. How complex would that be?

natamox (Author) commented Jul 19, 2023

@rgaudin @kelson42 Sounds really good; I think I'll give it a try… although I may not be very good at this.

rgaudin (Member) commented Jul 19, 2023

It's quite easy:

  • CLI param to capture the wanted tags; parse it into a list
  • in tags.py, skip a tag if its TagName is not in the list (-> the tag is not recorded to the DB)
  • in posts.py, in both passes' parsers, the list of tags should be filtered down to the requested ones instead of just being retrieved from the XML. A cleaner approach is to check whether each tag is in DB.tags_ids.inverse
  • the About template should mention that the ZIM is restricted to this list.
    And I think that should do it (a rough sketch follows).
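
A minimal, self-contained sketch of those steps, not sotoki code: the Tags attribute format matches the Stack Exchange Posts.xml dump, but the argument name and helper names are illustrative, and exactly where the checks plug into tags.py and posts.py would still need to follow the actual parsers.

```python
import argparse
import re
import xml.etree.ElementTree as ET


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Tag-filtering sketch")
    parser.add_argument(
        "--tags",
        default="",
        help="comma-separated tags to keep, e.g. 'javascript,python'",
    )
    args = parser.parse_args()
    # step 1: turn the CLI param into a set for cheap membership tests
    args.tag_set = {t.strip().lower() for t in args.tags.split(",") if t.strip()}
    return args


def keep_tag(tag_name: str, wanted: set) -> bool:
    # step 2: equivalent of skipping a row in Tags.xml so the tag is never
    # recorded into the database; an empty set means "keep everything"
    return not wanted or tag_name.lower() in wanted


def post_tags(row: ET.Element) -> list:
    # Posts.xml stores tags as "<tag1><tag2>..." in the Tags attribute
    return re.findall(r"<([^>]+)>", row.get("Tags", ""))


def keep_post(row: ET.Element, wanted: set) -> bool:
    # step 3: in both post-processing passes, only keep questions whose
    # tag list intersects the requested tags
    return not wanted or any(t.lower() in wanted for t in post_tags(row))


if __name__ == "__main__":
    args = parse_args()
    sample = ET.fromstring('<row Id="1" Tags="&lt;javascript&gt;&lt;node.js&gt;"/>')
    print(keep_tag("javascript", args.tag_set), keep_post(sample, args.tag_set))
```

Keeping the wanted tags in a set keeps the membership checks cheap (same idea as a DB.tags_ids.inverse lookup), and an empty --tags value preserves the current "scrape everything" behaviour.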

benoit74 (Collaborator) commented Jul 19, 2023

Don't you need something to fix broken links to questions that have been filtered out? You might for instance redirect these links to a static page which indicates that the question has not been scraped due to tag filters, like I did for iFixit. Or is this already handled in the scraper / not wanted?
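
A self-contained sketch of that idea, assuming the rewrite happens while each question's HTML is processed; the URL pattern and the filtered_out.html page name are assumptions, not sotoki's actual layout.

```python
import re

# Hypothetical static page inside the ZIM explaining that the target question
# was excluded by the tag filter (the iFixit-style approach mentioned above).
EXCLUDED_NOTICE = "filtered_out.html"


def rewrite_links(html: str, kept_question_ids: set) -> str:
    """Point links at the notice page when their target question was filtered out."""

    def fix(match: re.Match) -> str:
        question_id = int(match.group(1))
        if question_id in kept_question_ids:
            return match.group(0)  # target is in the ZIM, keep the link as-is
        return 'href="%s"' % EXCLUDED_NOTICE  # target filtered out, redirect

    # matches internal question links such as href="questions/12345/some-title"
    return re.sub(r'href="questions/(\d+)[^"]*"', fix, html)


if __name__ == "__main__":
    page = '<a href="questions/42/kept">ok</a> <a href="questions/7/gone">gone</a>'
    print(rewrite_links(page, kept_question_ids={42}))
```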

natamox (Author) commented Jul 19, 2023

When I run

env/bin/pip install -r requirements.txt

something went wrong:

Collecting libzim<3.0,>=2.1.0 (from zimscraperlib<3.0,>=2.1->-r requirements.txt (line 3))
  Using cached libzim-2.1.0.tar.gz (8.3 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Preparing metadata (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [2 lines of output]
      [!] The libzim library cannot be found.
      Please verify it is correctly installed and can be found.
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

OK, it has been solved. I first executed

./env/bin/python src/sotoki/dependencies.py

which reported an error that zimscraperlib could not be found. It worked after I ran

./env/bin/pip install zimscraperlib

kelson42 (Contributor) commented:

@natamox please open another ticket

rgaudin (Member) commented Jul 19, 2023

With your version of Python and your architecture, it looks like there's no libzim wheel.

natamox (Author) commented Jul 19, 2023

> With your version of Python and your architecture, it looks like there's no libzim wheel.

Thanks, it seems to work now. I ran setup.py, it said zimscraperlib was missing, I installed it, and then everything worked.
The Python version is 3.11.4; the architecture is arm64.

rgaudin (Member) commented Jul 20, 2023

> Don't you need something to fix broken links to questions that have been filtered out? You might for instance redirect these links to a static page which indicates that the question has not been scraped due to tag filters

We already remove links to questions that are not in the DB. What's not handled is the a/{aId} shortcut, because we don't store answers in the DB. For valid links this is not a problem because we create a redirect elsewhere, but for a missing target this would lead to a dead link.
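
To make the gap concrete, here is an illustrative check assuming only question IDs are stored; the answer-to-question mapping that would be needed to resolve a/{aId} shortcuts does not exist today, which is exactly why those can end up as dead links. All names here are assumptions, not sotoki's actual API.

```python
# Question links can be validated against the questions DB, but a/{aId}
# shortcuts cannot, because answers are not stored.
QUESTIONS_IN_ZIM = {42, 1337}   # question IDs kept after tag filtering
ANSWER_TO_QUESTION = {}         # mapping that would be needed to resolve a/{aId}


def link_target_exists(href: str) -> bool:
    if href.startswith("questions/"):
        # handled today: the link is dropped when the question is not in the DB
        return int(href.split("/")[1]) in QUESTIONS_IN_ZIM
    if href.startswith("a/"):
        # not handled: without an answer-to-question mapping we cannot tell
        # whether the redirect target exists, so this can become a dead link
        question_id = ANSWER_TO_QUESTION.get(int(href.split("/")[1]))
        return question_id in QUESTIONS_IN_ZIM
    return True  # external or other internal links, out of scope here


if __name__ == "__main__":
    print(link_target_exists("questions/42/title"), link_target_exists("a/99"))
```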
