
Decide the API #1

Open
methyl opened this issue Oct 1, 2020 · 8 comments

Comments


methyl commented Oct 1, 2020

Current proposals:

  1. A drop-in GenServer replacement, no additional handlers. Every handle_* callback that changes the state is hooked to push the state to clients via a Phoenix Channel. This is the current implementation (sketched below).

  2. Maintain separate state that gets updated and synchronized via handle_action/handle_mount callbacks.
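
For illustration, a minimal sketch of what option 1 looks like from the user's side (the module and handler names below are just placeholders; the point is that the push happens automatically):

defmodule Counter do
  use LiveData

  def init(_params), do: {:ok, 0}

  # an ordinary handle_call; because the returned state changed, it is
  # pushed to connected clients over the channel with no extra code here
  def handle_call({:add, n}, _from, state) do
    {:reply, :ok, state + n}
  end
end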


harmon25 commented Oct 4, 2020

Leaning more towards option 1 after having played with it...
Have not actually attempted 2 yet though, it may be more ergonomic.

A third option is a middle ground that uses a similar implementation to 1, but with different names for the handlers: allow the typical gen_server handle_call, and do the magic with a handle_(action/mount/update?) callback.


webdeb commented Oct 6, 2020

Hi guys, I wouldn't override the handle_call behaviour of the underlying gen_server, as this could cost us later.

Instead, keep the functionality as simple as possible via function composition, like the synchronize function. And maybe add a macro that wraps the native handle_call + synchronize into a nice DSL: handle_action(action, state), do: state

If anyone needs more control, they can still use the low-level API of handle_call and synchronize.
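
Roughly, such a macro could expand along these lines (just a sketch; synchronize/1 is the assumed helper that pushes the state to the channel, and it would have to exist in the module where the macro is used):

defmodule LiveData.DSL do
  # sketch only: turns `handle_action pattern, state do ... end` into a
  # plain handle_call clause that synchronizes the new state afterwards
  defmacro handle_action(action, state, do: block) do
    quote do
      def handle_call(unquote(action), _from, unquote(state)) do
        new_state = unquote(block)
        synchronize(new_state)
        {:reply, :ok, new_state}
      end
    end
  end
end

Usage would then be something like handle_action {:add, n}, state do state + n end.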


harmon25 commented Oct 6, 2020

It does seem the current implementation has a bug in the handle_call wrapper...
When defining a catch-all handle_call/3 to handle invalid actions/params, it seems to get defined first by the macro (I think), which causes all the other handle_call/3 clauses to be ignored and just returns the same state each time.

def handle_call(_, _, state), do: {:reply, :ok, state}

This would be easier to catch/avoid if we were using handle_action or something instead of handle_call/3.
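
For reference, this is just standard clause ordering in Elixir: the catch-all has to come after the specific clauses, otherwise it shadows them. Something like (handler names here are only for illustration):

# specific clauses first
def handle_call({:add, n}, _from, state), do: {:reply, :ok, state + n}

# catch-all last, so it only matches what nothing else matched
def handle_call(_invalid, _from, state), do: {:reply, {:error, :unknown_action}, state}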


methyl commented Oct 8, 2020

Implementation-wise, separate callbacks are way easier to achieve, but there is one specific reason I did not go for it.

It makes it harder to implement a state container that receives messages from some external process, for example:

defmodule JobLogs do
  use LiveData
  
  def init(%{"job_id" => id}) do
    Phoenix.PubSub.subscribe(JobRunner.PubSub, "job:#{id}")
    {:ok, []}
  end
  
  def handle_info({:stdout, _job_id, logs}, state) do
    {:noreply, state ++ String.split(logs, "\n")}
  end
end

Here I subscribe to a topic via Phoenix.PubSub, which means I receive a message that can be handled via handle_info. With a drop-in API like this one, you don't need to do anything other than handle_info and return the modified state, while with separate callbacks (which I also tested) it was super annoying to do something like this:

defmodule JobLogs do

  def handle_action({:append_to_logs, logs}, state) do
    {:noreply, state ++ String.split(logs, "\n")}
  end
  
  def handle_info({:stdout, _job_id, logs}, state) do
    call_action({:append_to_logs, logs})
    {:noreply, state}
  end
end

I think the added indirection is not worth it; it's cleaner and easier to interoperate with other systems when I can just handle_info and call it a day. I'm sure it's possible to get the implementation to a state where it works reliably and is bug-free.

For what it's worth, it's also the case for LiveView that I can handle_info and assign state from there:

defmodule DemoWeb.ThermostatLive do
  use Phoenix.LiveView
  ...

  def handle_info(:update, socket) do
    Process.send_after(self(), :update, 30000)
    {:ok, temperature} = Thermostat.get_reading(socket.assigns.user_id)
    {:noreply, assign(socket, :temperature, temperature)}
  end
end

It's definitely a good idea to peek at their implementation; maybe we can figure out the best of both worlds.

If you have any other ideas or I misunderstood something, let me know.


webdeb commented Oct 8, 2020

Like you said, the explicit flow is easier to implement, and I don't see why it's annoying if you can define an action which replaces the whole state:

defmodule JobLogs do
  def handle_action(:replace, new_state) do
    {:noreply, new_state}
  end
  
  def handle_info({:stdout, _job_id, logs}, state) do
    call_action(:replace, state ++ String.split(logs, "\n"))
  end
end

But OK, let's discuss what we actually already have in a "gen_server": we have "state" and "handlers". We also have the possibility to answer if someone is calling it: :noreply / :reply.

The only thing we need is to make it live through a channel. OK, if we agree that state changes should be pushed to our live subscribers, then I would only propose adding a :live_data_key so that only those changes are pushed. If empty/omitted, all the data is live:

use LiveData, live_data_key: :logs
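
As a sketch (assuming the option only scopes what gets pushed; everything else stays as it is today):

defmodule JobLogs do
  use LiveData, live_data_key: :logs

  def init(%{"job_id" => id}) do
    Phoenix.PubSub.subscribe(JobRunner.PubSub, "job:#{id}")
    {:ok, %{job_id: id, logs: []}}
  end

  def handle_info({:stdout, _job_id, logs}, state) do
    # only state.logs would be pushed to live subscribers, not job_id
    {:noreply, %{state | logs: state.logs ++ String.split(logs, "\n")}}
  end
end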


methyl commented Oct 8, 2020

An alternative to specifying the key is to have a serialize callback in which you can do data filtering and transformations on your own, like here: https://github.com/surferseo/live_data/blob/ts-gen/examples/phoenix_livedata_todomvc/lib/phoenix_livedata_todomvc_web/live_data/todo_state_persistent.ex#L85


harmon25 commented Oct 8, 2020

Yeah, the serialize callback allows for clearer/simpler memory management, like keeping just IDs in state and fetching the full records from ETS/the DB in the serialize callback.
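
Something along these lines (just a sketch; the :todos ETS table and the serialize/1 name are assumptions):

defmodule TodoState do
  use LiveData

  # keep only IDs in process state; full todos live in an ETS table
  def init(_params), do: {:ok, %{todo_ids: []}}

  # resolve IDs to full records right before pushing to clients
  def serialize(%{todo_ids: ids}) do
    Enum.flat_map(ids, fn id ->
      case :ets.lookup(:todos, id) do
        [{^id, todo}] -> [todo]
        [] -> []
      end
    end)
  end
end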

Think we are almost all on the same page...

@methyl I agree we should take a look at LiveView to better understand their implementation, and maybe borrow some ideas...

I think adding this as an optional feature to LiveView is worth suggesting if we have it working really well.


harmon25 commented Oct 13, 2020

Been experimenting with a Redux-like API for defining reducers/actions on the backend - with just a simple Agent backing the storage of state.

Reducer Macro API

use LiveData.Reducer

Root Reducer

defmodule LiveDataDemo.RootReducer do
  use LiveData.Reducer
  # not using this, but considering a mechanism for composing reducers to some arbitrary depth.
  # has to start somewhere - so the `:root` key is how that is defined.
  def key, do: :root
  # each reducer can have N children that define a key at that depth. (not actually working atm..)
  def children, do: [LiveDataDemo.CounterReducer, LiveDataDemo.AppReducer]

  def default_state() do
    %{
      counter: LiveDataDemo.CounterReducer,
      app: LiveDataDemo.AppReducer
    }
  end

  def default_state(existing) do
    existing
  end
end

Some actual reducers

Counter

defmodule LiveDataDemo.CounterReducer do
  use LiveData.Reducer

  def key, do: :counter

  def children, do: []

  def action({:add, number}, state, _context) do
    state + number
  end

  def action({:sub, number}, state, _context) do
    state - number
  end

  def action({:reset, _}, _state, _context) do
    default_state()
  end

  def default_state() do
    0
  end

  def default_state(existing_state) do
    existing_state
  end
end

App

defmodule LiveDataDemo.AppReducer do
  use LiveData.Reducer
  @valid_tabs ~w(counter chat modals)

  def key, do: :app

  # children is called to merge reducers that exist at this level of state - but are their own reducer/process.
  def children, do: []

  def action({:set_tab, tab}, state, _context) when tab in @valid_tabs do
    %{state | active_tab: tab}
  end

  def default_state() do
    %{active_tab: "counter"}
  end

  def default_state(existing_state) do
    existing_state
  end
end

The Store module is what ties it all together, and it is currently hooked up to the reducers via the application env - which is not ideal, but this doesn't really seem like dynamic configuration anyway; it's pretty static.

Added a callback to the store that gets invoked after an action has been dispatched - with old_state, new_state, and the store context, which is any map you pass in when starting the store. This can be used to broadcast diffs or whatever else on state change. It also allows actions dispatched to the store on the backend to be propagated to the front end.
This is a different approach from an observable gen_server: it's more about composing state from numerous smaller reducers and serializing the mutations through a single gen_server.
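
To make that concrete, the hook could look something like this (the callback name, the :topic key, and the PubSub server are illustrative, not the actual API):

# invoked after each dispatch with the old and new state plus the store
# context; here it broadcasts the new state whenever something changed
def handle_dispatched(old_state, new_state, %{topic: topic} = _context) do
  if old_state != new_state do
    Phoenix.PubSub.broadcast(LiveDataDemo.PubSub, topic, {:state_changed, new_state})
  end

  :ok
end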

This is completely separate from Phoenix + channels - but it exposes an API that makes the channel code quite simple...
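
Roughly like this (Store.dispatch/2, the "dispatch" event name, and socket.assigns.store are placeholders for what the demo actually does):

defmodule LiveDataDemoWeb.StoreChannel do
  use Phoenix.Channel

  # the client sends a redux-style action; the channel just forwards it
  def handle_in("dispatch", %{"type" => type, "payload" => payload}, socket) do
    action = {String.to_existing_atom(type), payload}
    LiveDataDemo.Store.dispatch(socket.assigns.store, action)
    {:noreply, socket}
  end
end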

That repo can be cloned and run pretty easily - it does nothing in the UI, but it exercises access to the store, diffs, and dispatching:
https://github.com/harmon25/live_data_demo/blob/new_process_arch/assets/js/app.js
