Feasibility
It takes a long time to fetch and scrape the problem on every request. It also opens the project up to uncaught "runtime" errors (e.g., bad problem formatting after a scrape).
Instead, we could cache problems. The very first time a user requests a problem with a given ID, we query to see whether we've already stored it (adding only milliseconds to the request); after the problem is sent to the user, we store its information for subsequent calls.
Reddit users own their posts, so this one is trickier, but I think it's okay to save and reuse the posts.
CodingBat's copyright belongs to Nick Parlante, so I'll have to reach out to him/the website before I proceed.
Design
The changes are minor. In each handler (except CodingBat), we include the following logic:
1. Check the DB for the post ID (if the user does not provide a post ID, check a random ID against the DB).
2a. If the post exists, return the cached post.
2b. If the post does not exist, query the site, create the payload, save it to the DB, and return the payload.
Occasionally, a cron job should clean and validate the cached problems.
As for storage, I think a NoSQL design is fine here since we'll only ever return one post at a time, and the post format could change. MongoDB makes the most sense among the current choices.
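To make the schema-flexibility point concrete: documents cached from different sources can carry different fields, which a schema-less MongoDB collection absorbs without migrations. The source names and field names below are illustrative, not the project's actual schema.

```python
# A cached Reddit post might carry Reddit-specific fields (author attribution,
# selftext), while a problem scraped from another site carries its own shape.
reddit_problem = {
    "source": "reddit",
    "post_id": "abc123",
    "title": "Weekly challenge",
    "selftext": "(post body)",
    "author": "u/someone",  # Reddit posts need author attribution
}

scraped_problem = {
    "source": "othersite",          # hypothetical second source
    "problem_id": "warmup-1",
    "prompt": "(problem statement)",
    "examples": ["f(1) -> 2"],      # no author field at all
}
```

In a relational schema these would force nullable columns or separate tables; as documents they can live side by side in one collection.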