Add unit tests for browser-capture service #149
Conversation
- Adds `src/services/browser-capture.test.ts`
- Mocks `puppeteer-core`, `node:fs`, and `node:os`
- Tests `startBrowserCapture`, `stopBrowserCapture`, `isBrowserCaptureRunning`, and `hasFrameFile`
- Verifies screencast frame handling and file writing
- Increases code coverage for critical headless browser functionality
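The frame path the description mentions can be sketched without any real browser. Everything below (the handler shape, the stand-in recorders, the `FRAME_FILE` role) is a hypothetical illustration of the behavior under test, not the service's actual code:

```typescript
// Rough sketch of what the tests verify: decode a base64 screencast
// frame, persist it, then acknowledge it so Chrome keeps streaming.
// Names and shapes here are assumptions for illustration only.

type FrameEvent = { data: string; sessionId: number };

// Stand-ins recording what a real fs / CDP session would receive.
const writes: Buffer[] = [];
const acks: Array<{ sessionId: number }> = [];

async function onScreencastFrame(event: FrameEvent): Promise<void> {
  const frame = Buffer.from(event.data, "base64");
  writes.push(frame); // stands in for writeFileSync(FRAME_FILE, frame)
  acks.push({ sessionId: event.sessionId }); // Page.screencastFrameAck
}

void onScreencastFrame({
  data: Buffer.from("test-image-data").toString("base64"),
  sessionId: 123,
});

// The handler body has no awaits, so its effects are visible synchronously.
console.log(writes[0].toString()); // "test-image-data"
console.log(acks[0].sessionId);    // 123
```

In the actual tests, `writeFileSync` and the CDP session's `send` are vitest mocks playing the roles that `writes` and `acks` play here.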
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode; when this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Important: Review skipped. Draft detected. Please check the settings in the CodeRabbit UI.
Summary of Changes

Hello @Dexploarer, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances test coverage for the browser-capture service by introducing a dedicated suite of unit tests. The tests use mocking to isolate the service logic from external dependencies such as puppeteer-core and the file system, providing fast, reliable validation of the service's behavior.
Activity
Code Review
This pull request introduces a solid set of unit tests for the browser-capture service, effectively mocking dependencies like puppeteer-core to validate browser interactions. The tests cover the primary happy paths for starting, stopping, and processing screencast frames. My review includes two suggestions for improvement. The first is a medium-severity recommendation to split a test case into two for better clarity and isolation. The second is a high-severity suggestion to add a new test case for an error scenario in the frame handling logic. The current implementation might cause the screencast to stall on a file write error, and a dedicated test would highlight this potential issue and improve overall test coverage.
```typescript
expect(writeFileSync).toHaveBeenCalledWith(FRAME_FILE, expect.any(Buffer));
expect(mockCDPSession.send).toHaveBeenCalledWith("Page.screencastFrameAck", { sessionId: 123 });
});
```
The implementation of the Page.screencastFrame event handler in browser-capture.ts uses a try...catch block that silently ignores errors. If an error occurs during writeFileSync, the Page.screencastFrameAck is not sent, which could cause the screencast to stall.
To improve test coverage and highlight this potential issue, please add a test case for this error scenario.
Here is a suggested test to add:
```typescript
it("should not send screencast frame ack on write error", async () => {
  await startBrowserCapture({ url: "http://example.com" });

  // Simulate an error during file write
  vi.mocked(writeFileSync).mockImplementation(() => {
    throw new Error("EIO: i/o error");
  });

  // Find and trigger the screencast frame event handler
  const onCall = mockCDPSession.on.mock.calls.find(
    (call: any[]) => call[0] === "Page.screencastFrame",
  );
  expect(onCall).toBeDefined();
  const onCallback = onCall![1];

  const frameData = {
    data: Buffer.from("test-image-data").toString("base64"),
    sessionId: 123,
  };

  // The error should be caught internally, so the call should not throw
  await expect(onCallback(frameData)).resolves.toBeUndefined();

  // Verify that the frame was NOT acknowledged due to the error
  expect(mockCDPSession.send).not.toHaveBeenCalledWith(
    "Page.screencastFrameAck",
    { sessionId: 123 },
  );
});
```

```typescript
it("should check if frame file exists", () => {
  vi.mocked(existsSync).mockReturnValue(true);
  expect(hasFrameFile()).toBe(true);
  expect(existsSync).toHaveBeenCalledWith(FRAME_FILE);

  vi.mocked(existsSync).mockReturnValue(false);
  expect(hasFrameFile()).toBe(false);
});
```
This test case combines two distinct scenarios: when the frame file exists and when it doesn't. For better clarity, readability, and test isolation, it's a best practice to separate these into two distinct it blocks. This makes it easier to identify which specific scenario fails if the test breaks.
```typescript
it("should return true if frame file exists", () => {
  vi.mocked(existsSync).mockReturnValue(true);
  expect(hasFrameFile()).toBe(true);
  expect(existsSync).toHaveBeenCalledWith(FRAME_FILE);
});

it("should return false if frame file does not exist", () => {
  vi.mocked(existsSync).mockReturnValue(false);
  expect(hasFrameFile()).toBe(false);
  expect(existsSync).toHaveBeenCalledWith(FRAME_FILE);
});
```
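The stall risk flagged in the first review comment also suggests a possible fix in the service itself. Here is a hedged sketch (types and helper names are assumptions, not the repo's actual code) that sends the ack from a `finally` block so a failed write cannot stop the screencast:

```typescript
// Hypothetical rework of the frame handler: a failed write drops one
// frame, but the ack is still sent, so the stream cannot stall.

type FrameEvent = { data: string; sessionId: number };

interface CDPLike {
  send(method: string, params: Record<string, unknown>): Promise<void>;
}

function makeFrameHandler(
  session: CDPLike,
  write: (frame: Buffer) => void,
): (event: FrameEvent) => Promise<void> {
  return async (event) => {
    try {
      write(Buffer.from(event.data, "base64"));
    } catch {
      // Losing a single frame is preferable to stopping the stream.
    } finally {
      await session.send("Page.screencastFrameAck", {
        sessionId: event.sessionId,
      });
    }
  };
}

// Demo: the write always throws, yet the ack still goes out.
const ackedSessions: number[] = [];
const handler = makeFrameHandler(
  {
    send: async (_method, params) => {
      ackedSessions.push(params.sessionId as number);
    },
  },
  () => {
    throw new Error("EIO: i/o error");
  },
);
await handler({ data: "aGk=", sessionId: 123 });
console.log(ackedSessions); // [ 123 ]
```

If the service adopted something like this, the suggested test above would flip: it would assert that the ack IS sent even when `writeFileSync` throws.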
Added unit tests for `src/services/browser-capture.ts` to improve test coverage. The tests mock `puppeteer-core` to verify browser interactions without launching a real browser instance.

Verified locally using `bun test` with `bun:test` mocks, then converted to standard `vitest` syntax for the codebase.

PR created automatically by Jules for task 5673411032727394941 started by @Dexploarer