Phi #24
Hi @cgarciae, good to hear from you! There's a lot to like here, and I can see a few places where it would be great to integrate your innovations. I'm going to write a few translations, mostly for my own purposes. Here's your doc:

P.Pipe(1.0, P + 1, P * 3) # 6.0

With fn.py, this would look like:

(F(_ + 1) >> _ * 3)(1.0) # 6.0

So, already, there are a few things that we can glean from these differences. First, it would be nice to have a constant function defined something like:

# In SKI combinator calculus this would be the K function, but would anyone get K as an alias, K(5)?
# maybe C(5)?
def constant(x):
    return lambda *args, **kwargs: x

(F(constant(1.0)) >> _ + 1 >> _ * 3)()

# Another possibility, to integrate more cleanly with >> operators
def constant(x):
    return F(lambda *args, **kwargs: x)

(constant(1.0) >> _ + 1 >> _ * 3)()

# We could also have F perform as 'constant' if it receives a literal, so
(F(1.0) >> _ + 1 >> _ * 3)()

The final solution is the most elegant, but would it lead to subtle bugs where one wouldn't get easy-to-understand error messages if they intended to return a function but accidentally returned a literal?

pipe(F(1.0), _ + 1, _ * 3)()

Right off the bat, I think the primary ideas you present above and beyond fn.py have to do with composition branching. Here's the array example from your readme:

P.Pipe(1.0, (P + 3) / (P + 1), [P + 1, P * 3]) # [3.0, 6.0]

I like the simplicity of the idea, but I'm a little worried about the magic the DSL introduces. It could be confusing in the context of other Pythonic code, especially having already introduced _ and F. In fn.py this would look like:

(F(lambda x: (x + 3) / (x + 1)) >> (lambda x: [x + 1, x * 3]))(1.0)

Honestly, that isn't so awful. It reads like idiomatic Python and it's relatively concise. It does leave me wanting, but I can deal. The branching in the last lambda of your code is the most interesting idea, but I think delegating to a function would make things clearer. I'm in a Ramda mood lately, and there they have the function converge. For the lazy, that looks like this in JS:

multiply( add(1, 2), subtract(1, 2) ); // original
R.converge(multiply, [add, subtract])(1, 2); //=> -3

In Python, within the fn context, with functions pulled from operator or made up, and why not with pipe and the F literal, your code might look like:

pipe(F(1.0), converge(divide, [F(add, 1), F(mul, 3)]), converge(list, [F(add, 1), F(mul, 3)]))()

If we do introduce any kind of infix operator or DSL for a converge-style operation (to list or to object), we should look to established patterns in Haskell and Scala. I'm rusty on Haskell, but I'm sure this is somewhere between the applicative monad and 'fork'. A more direct translation of the final lambda might be Haskell's 'sequence', which would combine the converge and list functions:

pipe(F(1.0), converge(divide, [F(add, 1), F(mul, 3)]), sequence(F(add, 1), F(mul, 3)))()

Another innovation in your implementation is that P is always the first arg, whereas we have _ set to retrieve numerically increasing args. This is pulled right from the top of our readme:

print (_ + _ * _) # "(x1, x2, x3) => (x1 + (x2 * x3))"

It would be interesting to be able to access the positional or named arguments via some mechanism that didn't conflict with property access. I could envision this working like:

pipe(F(1.0), (_._[0] + 3) / (_._[0] + 1), sequence(_._[0] + 1, _._[0] * 3))()

Ok, maybe that could be cleaner, but the idea is that we don't lose the existing functionality of incremental positional arguments or property access (e.g. _.x or _[0]). Maybe a little magic with _0, _1 and so on would be interesting as shortcuts for this, though that'd mean importing them from fn.py along with _. As I said though, I'm on the fence about the direct branching with []. Maybe others like this? If we take everything from above, it'd mean:

pipe(F(1.0), (_0 + 3) / (_0 + 1), [ _0 + 1, _0 * 3 ])()
# vs explicit
pipe(F(1.0), (_0 + 3) / (_0 + 1), sequence(_0 + 1, _0 * 3))()
# or without the pipe and constant F
((_0 + 3) / (_0 + 1) >> sequence(_0 + 1, _0 * 3))(1.0)

I have more to say, and I'm sure I want to revise what I've said above, but I'll post this for now as a starting point to see what you and others think about constant/F with literal, converge/sequence/array literal DSL, and positional args access.
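For readers who don't know Ramda's converge, here is a minimal Python sketch of the idea (the converge name and this signature are assumptions for illustration; fn.py does not currently ship such a helper):

# Sketch: apply each branching function to the same arguments,
# then feed all of the results to the `after` function.
def converge(after, branches):
    def converged(*args, **kwargs):
        return after(*(f(*args, **kwargs) for f in branches))
    return converged

from operator import add, sub, mul
converge(mul, [add, sub])(1, 2)  # mul(add(1, 2), sub(1, 2)) == 3 * -1 == -3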
Thanks @low-ghost for taking the time. Response to some of your comments:
from phi import P, Val
assert 3 == P.Run(
Val(1),
P + 2
)

Notice I just used Run, which takes no initial input since Val(1) provides it; with Pipe you would pass a dummy value:

P.Pipe(
None,
Val(1), # 1
P + 2 # 1 + 2 == 3
)

But for simple things you could also use the >> operator:

assert 9 == 1 >> P + 2 >> P * 3

Now, about the tuple syntax:
7 == Pipe(
1,
(
P + 1,
P + 5
)
)

Since a tuple is interpreted as a sequence, this is the same as

7 == Pipe(
1,
P + 1,
P + 5
)
[ 2, 6 ] == Pipe(
1,
[
P + 1,
P + 5
]
)
record = Pipe(
1,
dict(
x = P + 1,
y = P + 5
)
)
record.x == 2
record.y == 6
[ 10, 2 ] == Pipe(
1,
P + 1, # 1 + 1 == 2
{'x'}, # x = 2
P + 3, # 2 + 3 == 5
[
P + 5, # 5 + 5 == 10
'x' # 2
]
)

Now, I am also aware that using Python literals can be very confusing at the beginning, so I'd like to propose this: create the functions Rec, Read, Write, and Branch.

from phi import P, Rec, Read, Write, Branch
[0.5, 5] == P.Pipe(
1.0, #input 1
(P + 3) / (P + 1), # ( 1 + 3 ) / ( 1 + 1 ) == 4 / 2 == 2
Write('s'), # s = 2
Rec(
x = P + 1 #2 + 1 == 3
,
y = P * 3 #2 * 3 == 6
),
Branch(
Rec.x / Rec.y #3 / 6 == 0.5
,
Read('s') + 3 # s + 3 == 2 + 3 == 5
)
)

Regardless of whether they are Python objects or custom functions, you still have to get used to the concepts of sequencing, branching, reading, writing, etc.

Branch

With the Branch function, this

Branch(f, g, h)

could be equivalent to

lambda x: [ f(x), g(x), h(x) ]

Rec

With the Rec function, this

Rec(a = f, b = g, c = h)

could be equivalent to

lambda x: dict( a = f(x), b = g(x), c = h(x) )
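As a plain-Python illustration of those two equivalences (a throwaway sketch, not Phi's actual implementation, and ignoring Phi's attribute-style record access):

def Branch(*fns):
    # apply every function to the same input and collect the results in a list
    return lambda x: [f(x) for f in fns]

def Rec(**fns):
    # same idea, but collect keyword results into a dict
    return lambda x: {k: f(x) for k, f in fns.items()}

Branch(lambda x: x + 1, lambda x: x * 3)(2)   # [3, 6]
Rec(a=lambda x: x + 1, b=lambda x: x * 3)(2)  # {'a': 3, 'b': 6}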
@low-ghost hey, nice to hear about Ramda again; I contributed a few functions a couple of years ago!
Hi @cgarciae, thanks for sharing your work with our community! If you want to discuss your project, this project, or FP in Python in general, might I suggest Gitter? We have a chat room for our team and it has proven useful in the past. At any rate, I won't close this issue right now because it looks like you are both in the middle of a conversation; but when the conversation ends, please close the issue. Thank you both.
The more I think about allowing the literal in a pipe/chain, the more I like it. Does your dict functionality also support literal syntax, like:

{ 'x': P + 1, 'y': P + 2 }

I like the idea of having both the foreign-looking infix and literal syntax for shorthand and the long form. I'm not quite as on board with the 'read' and 'write' style functions. It's not only a bit too much magic for me, but it's also using mutation in ways that aren't explicit. I think this is a clearer translation, even though, sure, it's annoying:

from phi import P, Rec, Read, Write, Branch
mut_s = None
def set_s(val):
mut_s = val
[result, final_s] = P.Pipe(
1.0, #input 1
(P + 3) / (P + 1), # ( 1 + 3 ) / ( 1 + 1 ) == 4 / 2 == 2
set_s, # s = 2
Rec(
x = P + 1 #2 + 1 == 3
,
y = P * 3 #2 * 3 == 6
),
Branch(
Rec.x / Rec.y #3 / 6 == 0.5
,
mut_s + 3 # 2 + 3 == 5
)
)

Better, though, would be to either enforce breaking up the functions, or to pass the value along like:

[result, s] = P.Pipe(
1.0, #input 1
(P + 3) / (P + 1), # ( 1 + 3 ) / ( 1 + 1 ) == 4 / 2 == 2
Rec(
s = P
,
x = P + 1 #2 + 1 == 3
,
y = P * 3 #2 * 3 == 6
),
Branch(
Rec.x / Rec.y #3 / 6 == 0.5
,
Rec.s + 3 # 2 + 3 == 5
)
)

The Rec.prop functionality is already built into fn with _.x, so the retrieval step here wouldn't need any changes. While I'm on the idea and have the Ramda docs open (a great lib), the Rec function looks like applySpec. The if/else functionality of Phi is interesting as well, but I think it'd be more sound to build out a functional approach like if_else(func, func_if_true, func_if_false), or a cond function (more Ramda influence here), rather than using dot chaining. Definitely a lot of good things to think about.
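A rough sketch of what those two combinators could look like in Python (the names are borrowed from Ramda; neither exists in fn.py today, so treat this as an assumption for discussion):

def if_else(pred, on_true, on_false):
    # choose one of two functions based on a predicate over the same input
    return lambda x: on_true(x) if pred(x) else on_false(x)

def cond(pairs):
    # first matching (predicate, handler) pair wins; None if nothing matches
    def matcher(x):
        for pred, fn in pairs:
            if pred(x):
                return fn(x)
        return None
    return matcher

if_else(lambda x: x > 0, lambda x: 'pos', lambda x: 'non-pos')(3)  # 'pos'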
@low-ghost Since I first read your comment I've been thinking about whether the DSL is a pro or a con. But first I'll answer your question: yes, you can use the dictionary literal

{ 'x': P + 1, 'y': P + 2 }

but I prefer

dict(
x = P + 1,
y = P + 2
)

Now, back to the DSL issue.

Pros

Cons

Discussion

I've refactored a lot of the code internally to be able to swap out, or rather modify, the DSL easily (as a bonus it made the code a lot shorter). If we take out the tuple syntax, this

from phi import P, Val
[ 6, 12 ] == P.Pipe(
1,
[
(
P + 1, #1 + 1 == 2
P * 3 #2 * 3 == 6
),
(
Val(10), #10
P + 2 #10 + 2 == 12
)
]
)

can become

from phi import P, Branch, Seq
[ 6, 12 ] == P.Pipe(
1,
Branch(
Seq(
P + 1, #1 + 1 == 2
P * 3 #2 * 3 == 6
),
Seq(
10, #10
P + 2 #10 + 2 == 12
)
)
)

Here

What's more,

About Read & Write

These are obviously stateful, magic stuff; however, they are extremely useful. In creating a neural network, for example, take this fake binary classifier using TensorBuilder:

from tensorbuilder import T, Branch, Seq
import tensorflow as tf
x = tf.something()
y = tf.something()
[ h, trainer ] == T.Pipe(
x,
T.relu_layer(32)
.relu_layer(16)
.relu_layer(8)
.linear_layer(1)
.Branch(
T.sigmoid() # h
,
T.sigmoid_cross_entropy_with_logits(y) #loss
.minimize(tf.train.AdamOptimizer(0.01)) #trainer
)
)

you sometimes need/want to retrieve an intermediate layer as an output, like e.g.

from tensorbuilder import T, Branch, Seq, Read
import tensorflow as tf
x = tf.something()
y = tf.something()
[ h, trainer, relu8 ] == T.Pipe(
x,
T.relu_layer(32)
.relu_layer(16)
.relu_layer(8).Write.relu8
.linear_layer(1)
.Branch(
T.sigmoid() # h
,
T.sigmoid_cross_entropy_with_logits(y) #loss
.minimize(tf.train.AdamOptimizer(0.01)) #trainer
,
Read.relu8
)
)

or maybe like this, which is a little bit more readable (both would work):

from tensorbuilder import T, Branch, Seq, Read
import tensorflow as tf
x = tf.something()
y = tf.something()
[ h, trainer, relu8 ] == T.Pipe(
x,
T.relu_layer(32)
.relu_layer(16)
.relu_layer(8).Write("relu8")
.linear_layer(1)
.Branch(
T.sigmoid() # h
,
T.sigmoid_cross_entropy_with_logits(y) #loss
.minimize(tf.train.AdamOptimizer(0.01)) #trainer
,
Read("relu8")
)
)

As you see, I could just Write any intermediate layer and then Read it in the last Branch. Super easy.
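To make that concrete, here is a toy, dict-backed sketch of how such Read/Write helpers could behave (an illustration under assumptions; this is not Phi's or TensorBuilder's actual code):

_refs = {}  # hypothetical global store, for illustration only

def Write(name):
    def write(x):
        _refs[name] = x
        return x            # save the value and pass it through unchanged
    return write

def Read(name):
    return lambda _: _refs[name]   # ignore the incoming value, return the saved one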
I certainly get the usefulness of saving intermediate values in the course of functional composition. I'm on board with bringing some kind of state management into fn.py, but we should consider established patterns. First, I'm still of the opinion that both of these are a little clearer than your second example, albeit causing more code in the first, and potentially a lot of nesting/tracking in the second if multiple 'writes' had to occur. (Which is why I say, yeah, state management would be good for more complicated examples.) Breaking the code at the point of assignment:

from tensorbuilder import T, Branch, Seq
import tensorflow as tf
x = tf.something()
y = tf.something()
relu8 = T.Pipe(
x,
T.relu_layer(32)
.relu_layer(16)
.relu_layer(8)
)
[ h, trainer ] = T.Pipe(
relu8,
T.linear_layer(1)
.Branch(
T.sigmoid() # h
,
T.sigmoid_cross_entropy_with_logits(y) #loss
.minimize(tf.train.AdamOptimizer(0.01)) #trainer
)
)

and maintaining the value via seq or similar:

from tensorbuilder import T, Branch, Seq
import tensorflow as tf
x = tf.something()
y = tf.something()
[ relu8, [ h, trainer ] ] = T.Pipe(
x,
T.relu_layer(32)
.relu_layer(16)
.relu_layer(8)
.Seq(
T,
T.linear_layer(1)
.Branch(
T.sigmoid() # h
,
T.sigmoid_cross_entropy_with_logits(y) #loss
.minimize(tf.train.AdamOptimizer(0.01)) #trainer
)
)
)

I'm sure I not only massively messed up the formatting there, but also the intent. One of the established patterns I can think of is the state monad, though I'd have to read up on whether it's applicable here. Maybe Scala has something also (other than their own version of a state monad, that is). My point here is that we should thoroughly investigate existing approaches before creating new semantics.

On another note, why the capitals for Branch and Seq?

Finally, another advantage to the literal syntax (e.g. (F(10) >> [ _ + 1, _ + 2 ])) is that it actually is intuitive once you get past the abruptness of it. I should be able to do anything I want with _, including _.x, _ + 1, (_, _ + 2), [ x**2 for x in _ ], I don't know. And one of the problems with Ramda is the evolving complexity and sheer number of functions. Do I want 'converge', 'over', 'ap', 'applySpec', 'evolve', and so on, all of which do relatively similar things? There's something nice about seeing [ _ + 1, _ + 2 ] instead of a plethora of possible functions. Then again, if we only provide a few straightforward functions, e.g. branch, seq, dict, etc., it could have the same effect.
State Monad

Initially the implementation internally had something like the State Monad: the state was a dictionary that was passed through the DSL so Read and Write could do their thing. However, it had some limitations, and if you wanted to solve them properly you had to make all functions take in the incoming value and the state, and return a tuple with the new value and the new state. This might sound lazy, but it was easier to implement through a global state.

Also, your example:

from phi import P, Rec, Read, Write, Branch
mut_s = None
def set_s(val):
mut_s = val
[result, final_s] = P.Pipe(
1.0, #input 1
(P + 3) / (P + 1), # ( 1 + 3 ) / ( 1 + 1 ) == 4 / 2 == 2
set_s, # s = 2
Rec(
x = P + 1 #2 + 1 == 3
,
y = P * 3 #2 * 3 == 6
),
Branch(
Rec.x / Rec.y #3 / 6 == 0.5
,
mut_s + 3 # 2 + 3 == 5
)
)

doesn't work properly because of the order in which things are evaluated, but it can easily be corrected like this:

from phi import P, Rec, Read, Write, Branch
mut_s = None
def set_s(val):
mut_s = val
[result, final_s] = P.Pipe(
1.0, #input 1
(P + 3) / (P + 1), # ( 1 + 3 ) / ( 1 + 1 ) == 4 / 2 == 2
set_s, # s = 2
Rec(
x = P + 1 #2 + 1 == 3
,
y = P * 3 #2 * 3 == 6
),
Branch(
Rec.x / Rec.y #3 / 6 == 0.5
,
lambda x: mut_s + 3 # 2 + 3 == 5
)
)

I still think it would be easier to pretend it's implemented using the State Monad, educate people about the State Monad, but implement it statefully as it is right now and get the benefit of writing less code.

Why the capitals for Branch and Seq?

As you saw, TensorBuilder's Branch appears as a method in the chain; this is because Phi's operations are really methods on the P object.

By the way, the top-level "functions" are defined like e.g.

Read = P.Read
Write = P.Write
Val = P.Val
Pipe = P.Pipe
Branch = P.Branch
Seq = P.Seq
...

Thanks to them actually being methods, you can write compact things like

f = (P * 5).Branch(P % 3, P - 2)
f(2) == [ 1, 8 ] # [ 2 * 5 % 3, 2 * 5 - 2 ] == [ 10 % 3, 10 - 2 ] == [ 1, 8 ]

The DSL

I just created a branch on GitHub that removed the DSL. I am pretty happy, since internal complexity was reduced a lot, and now

10 >> Branch( _ + 1, _ + 2) # [ 11, 12 ]

works! and

10 >> Branch( 1000, _ + 1, _ + 2) # [ 1000, 11, 12 ]

also works!

Ramda & Functional Libraries

What I think (please correct me on this) most FP libraries are lacking is the ability to integrate existing functions into their flow of work while staying with a minimal set of functions. It's always easy to integrate something like

def some_fun(x):
#code
return y

to your FP code, but it's harder to integrate

def some_harder_fun(x1, x2, x3):
#code
return y

You can do all sorts of partial application, currying, etc. My take on this was to stick with the single-argument-function principle and create partials that apply the value into the correct spot. With phi you can use Then

from phi import P, Then
P.Pipe(
"Hello ",
P + "World",
Then(some_harder_fun, True, 10) # some_harder_fun( "Hello World", True, 10)
)

to pipe the incoming value to the first argument. But what happens if the value has to go in the second argument? Use Then2:

from phi import P, Then2
P.Pipe(
"Hello ",
P + "World",
Then2(some_harder_fun, True, 10) # some_harder_fun(True, "Hello World", 10)
)

And if you use Register2 you can attach it as a method:

P.Register2(some_harder_fun, "mylib") # DON'T DO THIS ON THE P OBJECT; create your own class and then use it as a method

P.Pipe(
"Hello ",
P + "World",
P.some_harder_fun(True, 10) # some_harder_fun(True, "Hello World", 10)
)

This is how all of TensorBuilder works.
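As a toy sketch of the Then-family idea (not Phi's actual implementation; then_at is a made-up helper for illustration):

def then_at(pos, f, *args):
    # build a single-argument function that inserts the piped value
    # at position `pos` in the final call to f
    def piped(x):
        new_args = list(args)
        new_args.insert(pos, x)
        return f(*new_args)
    return piped

Then  = lambda f, *args: then_at(0, f, *args)   # piped value becomes the 1st argument
Then2 = lambda f, *args: then_at(1, f, *args)   # piped value becomes the 2nd argument

Then(print, "World")("Hello")   # print("Hello", "World")
Then2(print, "World")("Hello")  # print("World", "Hello")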
I think I've settled into some thinking here and might even be able to convey it with a bit of brevity. Maybe it's even just this: fn.py's underscore should strive to support any right-hand expression. Or, essentially, what can be returned from a lambda. With that in mind, these conversions are intuitive:

lambda x: x + 1
_ + 1
filter(lambda x: x < 6, [1, 3, 5, 7, 9])
filter(_ < 6, [1, 3, 5, 7, 9])
lambda x: x['prop']
_['prop']
lambda x: x.prop / 10
_.prop / 10

That's about the limit of our support. To strive towards the above goal of all RHS support, we should also support these:

lambda x: [x, 2, 3]
[_, 2, 3]
lambda x: {'a': x + 6, 'b': 2, 'c': 3}
{'a': _ + 6, 'b': 2, 'c': 3}
lambda x: [y + 1 for y in x]
[y + 1 for y in _]

Though there is some magic happening here, the logic is intuitive and expected. If I were new to fn.py, I might just assume that these were already supported. Of course, they won't function correctly on their own, but they should be able to exist inside of an F or a >>/<</compose/pipe function, like

F([_, 2, 3])(1) # [1, 2, 3]
(_ + 1 >> [_, 2, 3] >> _[0])(1) # 2
pipe(_ + 1, [_, 2, 3], _[0])(1) # 2

for contrived examples. This quickly brings us to needing to be able to reference the same placeholder multiple times within the RHS. I'd actually want this because it is a simple extension of the above logic:

lambda x: x + x
_ + _

but because we already have the standard of multiple underscores standing in for incrementally increasing positional arguments, I'd advocate for something close to:

from fn import _0
(lambda x: x + x)(1)
(_0 + _0)(1) #2
(lambda x: [y + 1 for y in x if y > 2])([1, 2, 3, 4])
F([y + 1 for y in _0 if y > 2])([1, 2, 3, 4]) # [4, 5]
...

So far we've introduced almost nothing new here logically and have in fact only simplified the reasoning around _. If one wanted to bring in particular functions instead of the literal construction, the standard libs already make them available in the forms of 'tuple', 'list', 'dict', 'namedtuple', etc., instead of Branch and Seq and so on.

Now briefly on to state management. I like the idea, but one of the eventual goals of fn.py is to integrate monads whole-heartedly. In that case >> is not monadic composition, but merely function composition (equivalent to '.' in Haskell). We'd introduce a new syntax for do blocks and true monadic operations that would stand beside the standard >>'s. If we were to introduce the state monad with methods like 'put', 'get', 'eval_state' and so on, it might be highly confusing to also have a pseudo state monad built into function composition. However, monads are (perhaps) inherently complicated, and the state monad more so. It might be nice to have something easy to reach for without having to go all in. All that in mind, I'd say let's wait until we get a clearer picture of how Haskellian functionality will be included before integrating something like your Read and Write functions.

What do you think of all this? I'm just one person obviously and shouldn't speak for all of fn.py, so we should get input from the rest of the contributors as well.
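For reference, a very small State-monad sketch showing the put/get/eval_state shape mentioned above (illustration only; this is not fn.py's planned API):

# A stateful computation is modeled as a function: state -> (value, new_state).
def unit(value):
    return lambda state: (value, state)

def bind(m, f):
    # run m, then feed its value into f to get the next stateful computation
    def bound(state):
        value, new_state = m(state)
        return f(value)(new_state)
    return bound

get = lambda state: (state, state)            # read the current state as the value
put = lambda new: (lambda state: (None, new)) # replace the state

def eval_state(m, initial):
    return m(initial)[0]

# increment the state and return the old value
prog = bind(get, lambda s: bind(put(s + 1), lambda _: unit(s)))
eval_state(prog, 41)  # 41 (and the final state would be 42)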
Wow! Love your idea about supporting "right-hand expressions". They integrate very well with what I've implemented so far. I think I'll implement it right away, but I'll extend it to work on a very basic level:

from phi import P
f = P + 1 >> [ P, 2, 3 ]
f(0) == [ 1, 2, 3 ]

Then you can also use it with Seq:

from phi import P, Seq
f = Seq(
P + 1,
[ P, 2, 3 ]
)
f(0) == [ 1, 2, 3 ]

but I'll also implement it for all literals (tuple, dict, list, set). Internally what will actually happen is that

[ P, 2, 3 ]

will be interpreted as

Branch( P, Val(2), Val(3) )

If you have more ideas like this, please keep them coming!

State

I managed to implement

Current Stuff

Now

Pipe(1, P + 1, P * 2) == 4

is really just

Seq(1, P + 1, P * 2)(None) == 4

since it is really parsed to

Seq(Val(1), P + 1, P * 2)(None) == 4
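To make that interpretation concrete, here is a small sketch of how a list literal could be lowered onto Branch/Val (an illustration only, not Phi's actual parser):

def Branch(*fns):
    return lambda x: [f(x) for f in fns]

def parse(spec):
    # lists become branches, callables pass through, plain values act like Val(spec)
    if isinstance(spec, list):
        return Branch(*[parse(s) for s in spec])
    if callable(spec):
        return spec
    return lambda _: spec

parse([lambda x: x, 2, 3])(1)  # [1, 2, 3]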
Update

All these work now!

f = Seq(
P + 1,
[ P, 2, 3 ]
)
assert f(0) == [ 1, 2, 3 ]
f = Seq(
P + 1,
( P, 2, 3 )
)
assert f(0) == ( 1, 2, 3 )
f = Seq(
P + 1,
{ P, 2, 3 }
)
assert f(0) == { 1, 2, 3 }
f = Seq(
P + 1,
{"a": P, "b": 2, "c": 3 }
)
assert f(0) == {"a": 1, "b": 2, "c": 3 } |
>>

I think I'll have to back down from this:

P + 1 >> [ P, 2, 3 ]

because now

1 >> P + 1

will not evaluate to 2 but rather to a lambda equivalent to

lambda _: 2

What do you think the behavior should be?
I was thinking about some potential conflicts. What I'd say to that issue is: wrap units in parens:

(P + 1) >> [P, 2, 3]

But this might also be interpreted as a tuple, and you'd end up with

(tuple((P, P + 1)) >> [P, 3, 4])(1) # [(1, 2), 3, 4]

assuming the user wanted a tuple in a list and we don't allow tuple literal syntax. Generator comprehensions will also be tricky. I'm sure there are some other cases where decidability might be hard to determine and we'd have to specify that one must use the constructing function.

Also, and here I admit that I haven't yet spent much time looking into fn's internals, but fn does support this, likely by how it converts units between >>:

(F() >> _ + 1 >> _ * 2)(2) # 6

Would you want to start working on merging some of this into fn.py as a PR? If not, I don't mind taking a crack at it also, when I get the time.

EDIT: tuple creation in Python always seemed strange to me. Why do I have to call
Assuming the user wanted a tuple inside a list

I'd go with

P >> (P, P+1) >> [ P, 2, 3]

or even create a custom

Tuple(P, P+1) >> [ P, 2, 3]

This calls for the question, given that

[result, final_s] = P.Pipe(
1.0, #input 1
(P + 3) / (P + 1), # ( 1 + 3 ) / ( 1 + 1 ) == 4 / 2 == 2
Write('s'), # s = 2
Dict(
x = P + 1 #2 + 1 == 3
,
y = P * 3 #2 * 3 == 6
),
List(
Dict.x / Dict.y #3 / 6 == 0.5
,
Read('s') + 3 # 2 + 3 == 5
)
)

vs

[result, final_s] = P.Pipe(
1.0, #input 1
(P + 3) / (P + 1), # ( 1 + 3 ) / ( 1 + 1 ) == 4 / 2 == 2
Write('s'), # s = 2
Rec(
x = P + 1 #2 + 1 == 3
,
y = P * 3 #2 * 3 == 6
),
Branch(
Rec.x / Rec.y #3 / 6 == 0.5
,
Read('s') + 3 # 2 + 3 == 5
)
)

Independent of this, as of now you also have the literal syntax:

[result, final_s] = P.Pipe(
1.0, #input 1
(P + 3) / (P + 1), # ( 1 + 3 ) / ( 1 + 1 ) == 4 / 2 == 2
Write('s'), # s = 2
{
'x' : P + 1 #2 + 1 == 3
,
'y' : P * 3 #2 * 3 == 6
},
[
Rec.x / Rec.y #3 / 6 == 0.5
,
Read('s') + 3 # 2 + 3 == 5
]
)

About creating a Pull Request

I know some basic stuff about the internals of the operator overloading since I took code from there and modified it for Phi; however, I don't know why there is a distinction between

The other thing is that if you are on board with the DSL stuff I could move the

Actually, one of the reasons I didn't name

About ">>" again

I think I am going for this interpretation:

1 >> P

is equivalent to

lambda x: 1
I think this is a very neat example that now works:

f = P * [ P ]
assert f(0) == []
assert f(1) == [1]
assert f(3) == [3,3,3]
Hi, I have created this library called phi: https://github.com/cgarciae/phi

It has a modified version of fn.py's lambdas (it creates only single-argument functions) but includes a bunch of other stuff to make life easier, especially for creating fluent libraries. It was originally created as part of TensorBuilder, which is a library based on TensorFlow for Deep Learning, but I decided to decouple it.

I would love some feedback from you since the FP community in Python is rather small.
Feel free to close the issue.