We will learn how moving the source of truth for the data to the server changes the state shape and the reducers in our app.
Now I can understand why we persisted the list of data as an object in the previous lectures :)
I don't fully understand the reasoning behind fetching the ids by filter vs. fetching all ids and then filtering on the client. For one thing, the app is doubling the amount of data it needs to load, since All = Active + Completed todos. Furthermore, filtering on the client is faster and more responsive. This is just my way of reasoning about the problem and I could be missing something; could you perhaps shine some light on this particular implementation and its possible benefits?
byId reducer can be implemented with reduce:
const byId = (state = {}, action) => {
  switch (action.type) {
    case "SET_TODOS":
      return action.todos.reduce((prev, current) => {
        prev[current.id] = current;
        return prev;
      }, state);
    default:
      return state;
  }
};
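As a quick check of that reducer (the sample todos here are invented for illustration), feeding it a SET_TODOS action normalizes the array into an object keyed by id:

```javascript
// The byId reducer from the comment above, exercised with hypothetical todos.
const byId = (state = {}, action) => {
  switch (action.type) {
    case "SET_TODOS":
      return action.todos.reduce((prev, current) => {
        prev[current.id] = current;
        return prev;
      }, state);
    default:
      return state;
  }
};

const next = byId({}, {
  type: "SET_TODOS",
  todos: [
    { id: "1", text: "Learn Redux", completed: false },
    { id: "2", text: "Normalize state", completed: true }
  ]
});
// next is { "1": { id: "1", ... }, "2": { id: "2", ... } }
```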
~~~
> I don't fully understand the reasoning behind fetching the ids by filter vs. fetching all ids and then filtering on the client. For one thing, the app is doubling the amount of data it needs to load, since All = Active + Completed todos. Furthermore, filtering on the client is faster and more responsive. This is just my way of reasoning about the problem and I could be missing something; could you perhaps shine some light on this particular implementation and its possible benefits?
I wondered about this, too. I think continuing to have an "all" filter undermines and confuses the intention of the lesson a bit.
The reason Dan gives for filtering on the server, copied from the transcript, is as follows: "If we have thousands of todos on the server, it would be impractical to fetch them all and filter them on the client." That directly answers the question. Presumably when dealing with that magnitude of data you wouldn't have an option to view all the todos at once. Even if there were an "all" option it would probably involve pagination, which is itself a kind of filter.
But we only have three todos, and we do have a legacy option to view them all at once, so we might as well just continue fetching them all and filtering client-side. In order to illustrate the point better I think it would have been clearer to remove the "all" filter entirely. Then it would no longer be possible to figure out the contents of one array from the contents of another, because there wouldn't be overlap, and this implementation would make more sense within the context of our tiny dataset.
I've been trying to figure out why we bother having separate arrays for each filter type at all, if we're just going to fetch again whenever the filter changes. We could just as well have a single array of ids, which just reflects whatever filter was passed most recently to the server. I have to assume that the reason is to take advantage of caching, as Dan demonstrated, so that if you revisit a filter you instantly see what the app already loaded for that filter.
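My rough sketch of that caching idea (the action and filter names here are assumptions loosely modeled on the lesson, not the actual course code) would keep one ids array per filter, so only the matching filter's list updates when a fetch completes:

```javascript
// Hypothetical per-filter ids cache: one array of ids per filter,
// so revisiting a filter can render its last-fetched list instantly.
const createList = (filter) => (state = [], action) => {
  // Ignore actions meant for a different filter's list.
  if (action.filter !== filter) {
    return state;
  }
  switch (action.type) {
    case 'RECEIVE_TODOS':
      return action.response.map(todo => todo.id);
    default:
      return state;
  }
};

const listByFilter = {
  all: createList('all'),
  active: createList('active'),
  completed: createList('completed')
};

// Example: after fetching active todos, only the 'active' list updates.
const action = {
  type: 'RECEIVE_TODOS',
  filter: 'active',
  response: [{ id: '3', text: 'Ship it', completed: false }]
};
// listByFilter.active([], action)          → ['3']
// listByFilter.all(['1', '2', '3'], action) → ['1', '2', '3'] (cache untouched)
```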
> byId reducer can be implemented with reduce:
True, but that implementation mutates state by using it as the initial value for reduce and then changing its properties directly. You could start with { ...state } instead, like Dan does. However, mutation (even to the shallow degree that Dan's implementation uses it) can be avoided entirely by using reduce in this way instead:
const byId = (state = {}, action) => {
switch (action.type) {
case 'RECEIVE_TODOS':
return action.response.reduce((currentState, todo) => ({
...currentState,
[todo.id]: todo
}), state);
default:
return state;
}
};
Even though this implementation starts with state and not a copy, it doesn't matter, because each iteration within reduce returns a new state object built by spreading the previous one and then appending the updated data.
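To see that, here is a small sketch (with made-up todo data) showing the previous state object is left untouched while the returned object contains the merged result:

```javascript
// The non-mutating RECEIVE_TODOS reducer from above.
const byId = (state = {}, action) => {
  switch (action.type) {
    case 'RECEIVE_TODOS':
      return action.response.reduce((currentState, todo) => ({
        ...currentState,
        [todo.id]: todo
      }), state);
    default:
      return state;
  }
};

const before = { '1': { id: '1', text: 'a', completed: false } };
const after = byId(before, {
  type: 'RECEIVE_TODOS',
  response: [{ id: '2', text: 'b', completed: true }]
});
// before still has only '1'; after has both '1' and '2'.
```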