Another way of handling and testing dependencies #480
Replies: 2 comments
-
Thanks for sharing! That's definitely a viable alternative that would work well. It does introduce a bit of boilerplate in application code: an action case and a reducer case to switch on for each reducer that is instrumented. That boilerplate is probably much smaller than the test boilerplate in a well-tested application, though 😄 And you can at the very least minimize the reducer logic with a higher-order reducer:

```swift
extension Reducer {
  func analytics(
    action: CasePath<Action, AnalyticsClient.Event>,
    environment: (Environment) -> AnalyticsClient
  ) -> Self {
    fatalError("unimplemented")
  }
}
```
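For illustration, here is a rough sketch of how that higher-order reducer could be filled in. This assumes the pre-`ReducerProtocol` `Reducer` type, and an `AnalyticsClient` whose `track` endpoint returns a fire-and-forget `Effect`; treat it as one possible shape, not the library's blessed implementation:

```swift
extension Reducer {
  func analytics(
    action toEvent: CasePath<Action, AnalyticsClient.Event>,
    environment toClient: @escaping (Environment) -> AnalyticsClient
  ) -> Self {
    .init { state, action, environment in
      // Run the wrapped reducer first so state mutations happen as usual.
      let effects = self.run(&state, action, environment)

      // If this action carries an analytics event, fire it off alongside
      // the wrapped reducer's effects; otherwise pass effects through.
      guard let event = toEvent.extract(from: action) else { return effects }
      return .merge(
        effects,
        toClient(environment).track(event).fireAndForget()
      )
    }
  }
}
```

With this in place, each feature reducer only needs a single `.analytics(action:environment:)` call at composition time instead of a hand-written case in every reducer.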
-
I've taken a more traditional approach for testing our analytics effects. I have what's effectively a traditional mock object, which lets you set expectations that certain analytics effects will be received, and then later assert that those analytics were received. It also provides a factory function that creates a mock implementation of our analytics client which forwards events into this recorder object.

As well as cleaning up boilerplate, it has one other advantage: instead of just collecting all of the events into an array and asserting on them at the end of your test, it lets you assert more precisely when an analytics event is triggered (although it's not always possible to do this if you have a chain of actions triggered by effects that happen in a single synchronous call, due to the use of, say, a test or immediate scheduler).

This isn't the exact implementation, but it gives you an idea. First we have the recorder:

```swift
class AnalyticsEventRecorder {
  var expectedEvents: [Event] = []
  var receivedEvents: [Event] = []

  func expect(_ event: Event) {
    expectedEvents.append(event)
  }

  func assertAllExpectedEventsReceived() {
    XCTAssert(expectedEvents.isEmpty)
  }

  func record(_ event: Event) {
    receivedEvents.append(event)
    guard let nextExpected = expectedEvents.first else {
      return
    }
    if nextExpected == event {
      expectedEvents.removeFirst()
    }
  }
}
```

Now a small helper function that creates a mock client:

```swift
extension AnalyticsEventRecorder {
  func makeClient() -> AnalyticsClient {
    AnalyticsClient(track: { self.record($0) })
  }
}
```

In your test, you now create a recorder and pass a mock client into your environment, e.g.:

```swift
func testActionThatTracksEvent() {
  let recorder = AnalyticsEventRecorder() // normally created as a test case instance variable
  let store = TestStore(..., environment: SomeEnvironment(analytics: recorder.makeClient()))
  recorder.expect(.someEvent)
  store.send(.someActionThatTriggersSomeEvent) {
    // state changes
  }
  recorder.assertAllExpectedEventsReceived()
}
```

I haven't checked that any of the above compiles; it's just an illustration. Our actual implementation is a bit more comprehensive and will also keep track of unexpected events and fail if it receives any. There's also a closure-based API that lets you expect a single event from code performed inside a closure and auto-asserts, before resetting.
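The closure-based API mentioned above could be sketched roughly like this. The `expecting` name and its exact behavior are hypothetical, built on the same recorder type as above:

```swift
extension AnalyticsEventRecorder {
  // Hypothetical helper: expects exactly one event to be recorded while
  // `body` runs, asserts it was received, then resets the recorder so
  // subsequent expectations start from a clean slate.
  func expecting(
    _ event: Event,
    file: StaticString = #filePath,
    line: UInt = #line,
    _ body: () -> Void
  ) {
    expect(event)
    body()
    XCTAssert(
      expectedEvents.isEmpty,
      "Expected \(event) was not received",
      file: file,
      line: line
    )
    expectedEvents = []
    receivedEvents = []
  }
}
```

In a test this reads as `recorder.expecting(.someEvent) { store.send(.someAction) { /* state changes */ } }`, which keeps the expectation right next to the code that should trigger it.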
-
In one of your videos, analytics are discussed. Testing this in every case causes a lot of overhead, since each time you need to create an array of received analytics events, implement it on the environment, fill the array on events, and assert on the array.
For this I create a single action called `track(Event)`, where `Event` is an enum. I write a single test to validate the `track` action. In my other actions I just concatenate the track effect onto the returned effect.
If I now want to test this, I only need to test the receiving of the action, and no longer need to implement all the overhead around it.
A downside of this approach is that the number of received events can quickly grow. For this I added an idea to allow the test store to terminate the testing of the chain of events once the part you are interested in has finished.
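The pattern described above could be sketched like this. The `AppAction`, `AppState`, `AppEnvironment`, and `analyticsClient` names are assumptions for illustration, using the pre-`ReducerProtocol` TCA API:

```swift
enum AppAction: Equatable {
  case buttonTapped
  case track(Event)
}

let appReducer = Reducer<AppState, AppAction, AppEnvironment> { state, action, environment in
  switch action {
  case .buttonTapped:
    // Do the real work for the action, then concatenate the tracking action
    // so it flows through the reducer like any other action.
    return Effect(value: .track(.buttonTapped))

  case let .track(event):
    // The single place that talks to the analytics client,
    // covered by one dedicated test.
    return environment.analyticsClient
      .track(event)
      .fireAndForget()
  }
}
```

In tests, asserting analytics then reduces to `store.receive(.track(.buttonTapped))` after sending the triggering action, with no recorder plumbing in the environment.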