\documentclass[11pt]{article} % use larger type; default would be 10pt
\input{../common/preamble}
\title{Games with ordinal sequences of moves}
\author{John Gowers}
\date{}
\begin{document}
\begin{titlepage} % Shamelessly stolen from the online LaTeX book.
\centering
{\scshape\LARGE University of Bath \par}
\vspace{1cm}
{\scshape\Large PhD Confirmation Report \par}
\vspace{1.5cm}
{\huge\bfseries Games with ordinal sequences of moves \par}
\vspace{2cm}
{\Large\itshape John Gowers \par}
\vfill
supervised by\par
Dr.~James \textsc{Laird}
\vfill
% Bottom of the page
{\large \today\par}
\end{titlepage}
\addtocontents{toc}{\protect\enlargethispage{35mm}}
\tableofcontents
\thispagestyle{empty}
\setcounter{tocdepth}{2}
\section{Introduction}
\lettrine[image=true, lines=6, findent=3pt, nindent=0pt]{cleos-g.png}{ame Semantics}, or the use of games and strategies to model logical systems and programming languages, has its origins in Lorenzen's game-theoretic model of intuitionistic logic \cite{lorenzengames}. Towards the end of the twentieth century, several authors, starting with Blass \cite{blassgames}, realized that certain natural operations on games could be used to model the connectives in Girard's linear logic. Major results from this enterprise include:
\begin{itemize}
\item A fully complete categorical semantics for the multiplicative fragment of linear logic, together with the MIX rule, by Abramsky and Jagadeesan \cite{abramskyjagadeesangames}
\item A fully complete categorical semantics for the multiplicative fragment of linear logic without the MIX rule, by Hyland and Ong \cite{HylandOngGames}
\item A modification of Abramsky and Jagadeesan's construction, giving a fully complete model of the multiplicative-exponential fragment of linear logic, by Baillot, Danos, Ehrhard and Regnier \cite{bion}
\item Fully complete models of the negative fragment of linear logic by others, starting with independent work by Curien \cite{curiengames} and Lamarche \cite{lamarchegames}.
\end{itemize}
Another major early result was the use of game semantics to give a fully abstract model of the programming language PCF; this result was obtained independently, using different techniques, by Abramsky, Jagadeesan and Malacaria \cite{ajmPcf}, by Hyland and Ong \cite{hoPcf}, and by Nickau \cite{nickauPcf}.
Central to much of the current work on game semantics for linear logic is the treatment of the exponential connective $\oc$. Several approaches have been made to model this connective in a game-theoretic way (see Section 3 of \cite{martinsthesis} for a brief survey). We shall use the construction from \cite{ajmPcf} and \cite{hyland1997games}: if $A$ is a game, then $\oc A$ is the game formed by playing a countably infinite number of copies of $A$ --- $A_0,A_1,A_2,\dots$ --- in parallel, where only the opponent $O$ may switch games and where neither player may play in game $A_{n+1}$ until a move has been made in game $A_n$.
This definition, and in particular the restriction on the order in which games may be started, allows us to give the game $\oc A$ the structure of the \emph{cofree commutative comonoid} over $A$ \cite{LairdCofCommCom} and gives rise to a \emph{linear exponential comonad} on our category of games. The linear exponential comonad \cite{SchalkWhatIs} and the cofree commutative comonoid functor \cite{LafontCofCommCom,MelliesCofCommCom} are both sufficient to model the exponential connective from linear logic.
The restriction that the copies of $A$ in $\oc A$ must be opened in order motivates the following definition \cite{laird02}: if $A,B$ are games, then the \emph{sequoid} $A\sequoid B$ of $A$ and $B$ is the game in which $A$ and $B$ are played in parallel, only the opponent $O$ may switch games, and play must start in $A$. It turns out that if $A$ is a game, then the operation $A\sequoid\blank$ gives rise to an endofunctor on the category $\G$ of games, and that the exponential $\oc A$ is the final coalgebra for this endofunctor. That is to say, we have a morphism $\cmap{\alpha}{\oc A}{A\sequoid\oc A}$ such that whenever $\cmap{\sigma}{B}{A\sequoid B}$ is a morphism, there is a unique morphism $\cmap{\fcoal{\sigma}}{B}{\oc A}$ such that the following diagram commutes:
\[
\begin{tikzcd}
B \arrow[r, "\sigma"] \arrow[d, "\fcoal\sigma"']
& A\sequoid B \arrow[d, "A\sequoid\fcoal\sigma"] \\
\oc A \arrow[r, "\alpha"']
& A\sequoid\oc A
\end{tikzcd}
\]
Existing proofs of this fact in a finitary games setting (e.g., in \cite{martinsthesis}) make implicit use of the fact that $\oc A$ is the limit of the sequence
\[
I \leftarrow A \leftarrow A\sequoid A \leftarrow A\sequoid(A\sequoid A) \leftarrow \dots
\]
Indeed, any finite-length play in $\oc A$ must involve only finitely many copies of $A$, and so exists as a play in $(A\sequoid\blank)^n I$ for some $n$.
On the other hand, when we start imposing winning conditions upon infinite plays, $\oc A$ is no longer the limit of the sequence above. Indeed, define a game $\co A$ which is exactly like $\oc A$ except that all infinite plays that contain moves from infinitely many copies of $A$ are wins for the player $P$, regardless of who has won the individual games. This game still admits a cone over the sequence above, since any play occurring in $(A\sequoid\blank)^nI$ for some $n$ must involve only finitely many copies of $A$. Moreover, it does not admit a morphism of cones to $\oc A$. In fact, in the infinitary setting, $\co A$, and not $\oc A$, is the limit of the above sequence.
\[
\begin{tikzcd}
\co A \arrow[d] \arrow[dr] \arrow[drr] \arrow[drrr] \arrow[drrrr, phantom, yshift=0.5em, "\cdots"]
&
&
&
& \\
I
& A \arrow[l]
& A\sequoid A \arrow[l]
& A\sequoid(A\sequoid A) \arrow[l]
& \cdots \arrow[l]
\end{tikzcd}
\]
Applying the functor $A\sequoid\blank$ to the cone from $\co A$ gives rise to another cone over the sequence:
\[
\begin{tikzcd}
%
& A\sequoid\co A \arrow[dl] \arrow[d] \arrow[dr] \arrow[drr] \arrow[drrr, phantom, yshift=0.5em, "\cdots"]
&
&
& \\
I
& A \arrow[l]
& A\sequoid A \arrow[l]
& A\sequoid(A\sequoid A) \arrow[l]
& \cdots \arrow[l]
\end{tikzcd}
\]
and therefore a morphism $A\sequoid\co A\to \co A$. We can then continue to apply the functor $A\sequoid\blank$ to build up a second layer of the sequence:
\[
\begin{tikzcd}
\co A \arrow[d] \arrow[dr] \arrow[drr] \arrow[drrr] \arrow[drrrr, phantom, xshift=1em, "\cdots"]
& A\sequoid\co A \arrow[l]
& A\sequoid(A\sequoid \co A) \arrow[l]
& A \sequoid(A\sequoid(A\sequoid\co A)) \arrow[l]
& \cdots \arrow[l] \\
I
& A \arrow[l]
& A\sequoid A \arrow[l]
& A\sequoid(A\sequoid A) \arrow[l]
& \cdots \arrow[l]
\end{tikzcd}
\]
What is the limit of this entire diagram? An $O$-winning infinite play $s$ in $(A\sequoid\blank)^n \co A$ may be identified with an $O$-winning play in $\oc A$ such that either the opponent $O$ wins in one of the games $A_1,\dots,A_n$ or only finitely many copies of $A$ are opened. It follows that the limit of the diagram \emph{is} the final coalgebra $\oc A$.
The diagrams we have drawn above are special cases of the \emph{final sequence} \cite{finalseq}: given a category $\C$ with a terminal object $1$ and enough limits (in a sense which will become clear) and an endofunctor $\cmap{\F}{\C}{\C}$, we may construct an ordinal-indexed sequence
\[
1, \F(1), \F^2(1),\dots \F^\omega(1), \F^{\omega+1}(1),\dots,\F^\delta(1),\dots
\]
together with commuting morphisms $\F^\gamma(1)\to\F^\delta(1)$ for $\delta\le\gamma$, by setting $\F^{\delta+1}(1) = \F(\F^\delta(1))$ and letting $\F^\mu(1)$ be the limit of the diagram formed by the $\F^\gamma(1)$ for $\gamma<\mu$ if $\mu$ is a limit ordinal. It is well known that if $\delta$ is an ordinal and the map
\[
\F^{\delta+1}(1)\to\F^{\delta}(1)
\]
is an isomorphism, then the inverse of this isomorphism exhibits $\F^{\delta}(1)$ as the final coalgebra of $\F$. In this case, we say that the final sequence \emph{stabilizes at $\delta$}.
In the case of finitary games, then, we have
\[
\oc A\cong(A\sequoid\blank)^\omega(I)
\]
and in the infinitary case, we have
\[
\oc A\cong(A\sequoid\blank)^{\omega2}(I)
\]
The eventual goal of this research is to study the behaviour of the final sequence of the functor $A\sequoid\blank$ in categories of games. Since the sequence stabilizes so early on for existing categories of games, this question is not very interesting in that setting. We want to modify our definitions of games in such a way that this final sequence will take much longer to stabilize, and it turns out that one way to do that is to allow plays of transfinite length.
Games with ordinal sequences of moves are not a new idea. They have, for example, been studied in \cite{longgamesbook} from the point of view of determinacy. Even in game semantics, as we shall see, we may identify games in which we impose a winning condition on the infinite plays with games in which the players are allowed to make plays of length $\omega$ -- we shall call these $(\omega+1)$-games, in contrast to the finitary games, which we shall call $\omega$-games. Allowing plays whose lengths are longer transfinite ordinals will give rise to a richer variety of plays, which will be reflected in the final sequence.
One important aspect of the shift from finite to transfinite plays is that the idea of a \emph{play} or \emph{position} takes on a new importance, in contrast to the finitary case, where we can define everything in terms of \emph{moves}. Normally, a move is designated as either a $P$-move or an $O$-move; we will extend this so that every position is either a $P$-position or an $O$-position. A position $s$ will be an ordinal-indexed sequence of moves; if there is a last move $a$, then $s$ is a $P$-position or an $O$-position according as $a$ is a $P$-move or an $O$-move. If the length of $s$ is a limit ordinal, however, then its sign cannot be determined from the signs of its constituent moves. This is useful if we want to model games with a winning condition as $(\omega+1)$-games: a play of length $\omega$ in a game corresponds to an infinite play in that game, and its designation as a $P$-position or $O$-position is essentially the same thing as saying whether it is a $P$-winning or $O$-winning play.
This approach is similar to that of Blass \cite{blassgames}, and indeed our treatment of $P$-positions and $O$-positions under the connectives will follow his approach. For example, a position $s$ in a tensor product $A\tensor B$ is a $P$-position if and only if its restrictions to $A$ and $B$ are both $P$-positions.
To avoid certain mathematical difficulties, we shall follow Curien, Lamarche and others by requiring that our games be \emph{negative} -- that is, they must always begin with an $O$-move. In fact, we shall go further than that: under the usual alternating condition, saying that $O$ always starts is the same thing as requiring that the empty position $\emptyplay$ is a $P$-position. In order to avoid problems constructing our category, we shall need to require that \emph{all} positions whose lengths are limit ordinals be $P$-positions. We shall call games satisfying this property \emph{completely negative}.
This condition on the sign of games allows us to construct a well-behaved category of games, but it has one serious problem that occurs as soon as we consider games with transfinite-length plays: if $A,B$ are completely negative games, then the linear implication $A\implies B$ is not necessarily completely negative.
We want $A\implies B$ to be the internal hom in our closed monoidal category, so we need to find a way around this. Our solution is to construct, for every game $A$, a `minimal extension' $A^{\cn}$ of $A$ that is completely negative. We prove various results about the relationship between $A^{\cn}$ and $A$ that make this precise and then use them to show that the game $A\impliescn B\coloneqq (A\implies B)^{\cn}$ does give rise to a closed monoidal structure in our category. We also give a tentative way of constructing a category whose objects are \emph{all} games, not just the completely negative ones, and use our results to show that this category is in fact equivalent to our existing category of completely negative games.
Having constructed a symmetric monoidal closed category of games, we introduce the sequoid operator and the exponential, and study the final sequence. We shall characterize the exponential $\oc A$ as a final coalgebra for the functor $A\sequoid\blank$ and use this characterization to prove that $\oc A$ admits the structure of a cofree commutative comonoid on $A$. Then we turn to the final sequence. This section will involve some material that is more ordinal-theoretic than game-theoretic; we assign an ordinal \emph{rank} to a (transfinite) sequence of natural numbers and then show that this idea of rank can be used to characterize each term of the final sequence for a functor of the form $A\sequoid\blank$. We will show that the reason that the final sequences converge so quickly for finitary games and win-games is that a sequence of length $\omega$ necessarily has low rank: once we allow longer plays, we can get sequences with higher ranks and the final sequence will carry on for longer.
\subsection{Original contribution}
Games with ordinal sequences of enablings have been studied by Laird from a Game Semantics point of view \cite{LairdOrdinalGames}, but to my knowledge this is the first transfinite games category built in the tradition of Abramsky and Jagadeesan. Passing to transfinite games causes problems when we try to show that our category is monoidal closed, and we have had to develop new techniques in order to recover monoidal closure (see Sections \ref{TransSMCCSex} to \ref{ImpliesCnSex}).
The characterization of the final coalgebra for the functor $A\sequoid\blank$ in a transfinite games category is new, though the behaviour in finitary game categories and win-game categories is well known. Our definition of the rank of a sequence of natural numbers is heavily motivated by the game semantics, so it is likely that this is new too (though similar constructions of ordinal ranks of objects are common in mathematics).
Otherwise, most of the material in this report is a straightforward translation of the existing game semantics of Abramsky, Jagadeesan, Hyland, Ong, Curien, Lamarche and so on into the transfinite setting.
\subsection{Conventions}
I shall denote the composition of morphisms $A\xrightarrow{\sigma}B\xrightarrow{\tau}C$ by $\comp\tau\sigma$, the disjoint union of two sets $X,Y$ by $X\cprd Y$ and the corresponding copairing of two functions $\map{f}{X}{Z},\map{g}{Y}{Z}$ by $\cmap{f\cprd g}{X\cprd Y}{Z}$.
Following the $\forall$belard/$\exists$loise convention that is sometimes used for Opponent/Player, I shall refer to the opponent $O$ as `he' and the player $P$ as `she' throughout.
\subsection{Acknowledgements}
It goes without saying that I owe my biggest debt of gratitude for the material in this report to my supervisor, Jim Laird. Dr Laird introduced me to Game Semantics a year ago and suggested my initial direction of research in the subject, and his insights and instinct for the subject have been invaluable in the development of the material contained in this report, and in the preparation of the report itself -- particularly when things were going wrong. Dr Laird gave me some extremely useful feedback on an earlier draft of this report and supplied some of the details for the material in Section \ref{SeqExpIsCofCommCom}.
I am indebted to all the authors listed in the Bibliography, but I single out the seminal papers of Blass \cite{blassgames} and Abramsky-Jagadeesan \cite{abramskyjagadeesangames} for their development of the subject and their clear presentation of their full completeness results, the set of notes on Game Semantics by Martin Hyland \cite{hyland1997games} and the excellent online notes by Andrea Schalk \cite{Schalk2001GsNotes,SchalkWhatIs}. I also single out the PhD thesis of Martin Churchill \cite{martinsthesis}, prepared under Dr Laird's supervision. The work in this report can be seen as a partial sequel to some of Dr Churchill's work, and the clear presentation in his thesis has greatly helped in this task.
I would also like to extend my thanks to Paul Blain Levy for his work organizing the GaLoP 2016 conference and the Midlands Graduate School 2016, both of which I attended this year. I presented some of this material at GaLoP and the questions and comments I received there have shaped the course of this work. Thanks also to Alexander Kurz for his lectures on coalgebra at the Midlands Graduate School.
Thanks go to all my friends and family members who have patiently supported me while I embarked on this research. Thank you to CN for designing the splendid letter `G' (complete with $\sequoid$ symbols!) that features at the beginning of this introduction.
\section{Our starting category of games}
Before studying games with transfinite sequences of moves, we shall illustrate some of the choices we have made by defining a category of games with finite sequences of moves. We have chosen these definitions because they extend particularly well to the transfinite case.
\subsection{Games and strategies}
We shall use the notation introduced in \cite{abramskyjagadeesangames} to describe games. All our games $A$ will have, at their heart, the following three pieces of information:
\begin{itemize}
\item A set $M_A$ of possible moves
\item A function $\cmap{\lambda_A}{M_A}{\OP}$ assigning to each move the player who is allowed to make that move
\item A prefix-closed set $P_A\subset M_A^*$ of finite sequences of moves.
\end{itemize}
We shall normally insist on an \emph{alternating condition} on $P_A$:
\begin{description}
\item[Alternating condition] If $a,b\in M_A$ are moves and $s\in M_A^*$ is a sequence of moves such that $sa, sab\in P_A$, then $\lambda_A(a)=\neg\lambda_A(b)$.
\end{description}
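The alternating condition can be stated operationally. As a minimal sketch (not part of the report's formal development), if we represent each move as a pair of a name and its player, a play satisfies the condition exactly when consecutive moves belong to opposite players:

```python
# Illustrative encoding (my own, not from the report): a play is a list of
# (name, player) pairs with player drawn from {"O", "P"}.

def alternating(play):
    """Check that players alternate along a sequence of labelled moves."""
    return all(play[i][1] != play[i + 1][1] for i in range(len(play) - 1))

# Example: q is an O-move, a is a P-move.
assert alternating([("q", "O"), ("a", "P"), ("q2", "O")])
assert not alternating([("q", "O"), ("q2", "O")])
```

Prefix-closure of $P_A$ guarantees that checking the condition on a play also validates all of its prefixes.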
As in \cite{abramskyjagadeesangames}, we identify a \emph{strategy} for a game $A$ with the set of sequences of moves that can occur when player $P$ is playing according to that strategy, so that a typical definition of a (partial) strategy might be a set $\sigma\subset P_A$ such that (for all $s\in M_A^*$ and $a,b\in M_A$):
\begin{itemize}
\item $\emptyplay\in\sigma$ (ensures that $\sigma$ is non-empty)
\item If $sa\in\sigma$, $\lambda_A(a)=P$ and $sab\in P_A$ then $sab\in\sigma$ ($\sigma$ contains all legal replies by player $O$)
\item If $s,sa,sb\in\sigma$ and $\lambda_A(a)=P$ then $a=b$ ($\sigma$ contains at most one legal reply by player $P$)
\end{itemize}
We can impose additional constraints on $\sigma$ that will ensure that $\sigma$ is total, strict, history-free and so on. The definition given immediately above is not the only definition of a strategy found in the literature, however. For example, the games described in \cite{abramskyjagadeesangames} have the curious property that the set $P_A$ may contain plays that cannot actually occur when $A$ is being played; in particular, all plays must start with a move by player $O$, but the set $P_A$ may contain positions that start with a $P$-move. These plays do not affect the strategies for $A$, but they might come into play if we perform operations on $A$ such as forming the negation $\neg A$ or the implication $A\implies B$.
This behaviour is left implicit in Abramsky and Jagadeesan's definitions, which do not impose any conditions upon the set $P_A$ beyond the basic alternation condition given above, but which mandate that any play occurring \emph{in a strategy} must begin with an $O$-move. For the sake of clarity, we adopt a different, but completely equivalent, approach. For a game $A$, we define a set $L_A$, regarded as the set of \emph{legal plays} occurring in $P_A$. In some games models, such as that found in \cite{blassgames}, $L_A$ may be defined to be the whole of $P_A$, while in \cite{abramskyjagadeesangames} it is defined to be that subset of $P_A$ consisting of plays that begin with an $O$-move.
The point of specifying $L_A$ separately is that it allows us to unify the definition of a \emph{strategy}, while making clearer the behaviour observed above, whereby certain plays in $P_A$ may not occur `in normal play'; this behaviour was previously only implicit in the definition of a strategy. Our unified definition then becomes:
\begin{definition}
If $A=(M_A,\lambda_A,P_A)$ is a game and $L_A$ is its associated set of legal plays (in a particular games model) then a (partial) \emph{strategy} for $A$ is a subset $\sigma\subset L_A$ such that for all $s\in L_A$ and all $a, b\in M_A$:
\begin{itemize}
\item $\emptyplay\in\sigma$
\item If $s\in\sigma$ and $a$ is an $O$-move, and if $sa\in L_A$, then $sa\in\sigma$
\item If $s\in\sigma$ and $a,b$ are $P$-moves, and if $sa,sb\in\sigma$, then $a=b$
\end{itemize}
\end{definition}
\subsection{Positive and negative games, ownership of plays and connectives}
Abramsky-Jagadeesan games, as described in \cite{abramskyjagadeesangames}, may admit both plays that start with a $P$-move and plays that start with an $O$-move. Other games models, such as those found in \cite{blassgames} and \cite{curiengames}, are more restrictive. The games in \cite{curiengames} only contain plays starting with an $O$-move. The plays in \cite{blassgames} may start with either a $P$-move or an $O$-move, but a play starting with a $P$-move and a play starting with an $O$-move may not occur in the same game.
\begin{definition}
We say that a game $A=(M_A,\lambda_A,P_A)$ is \emph{positive} if every play in $P_A$ begins with a $P$-move.
We say that $A$ is \emph{negative} if every play in $P_A$ begins with an $O$-move.
\end{definition}
So the Curien model found in \cite{curiengames} admits only negative games, the Blass model in \cite{blassgames} admits positive and negative games, while the Abramsky-Jagadeesan model found in \cite{abramskyjagadeesangames} admits not only positive and negative games, but also games that are neither negative nor positive. We shall now examine the reasons for and drawbacks of each of these choices.
The earliest games model, found in \cite{conwaygames}, did not include a definition of which player is to move at a given position; rather, games are defined recursively as pairs of games $\{L|R\}$, where $L$ represents the positions that the left player may move into, while $R$ represents the positions that the right player may move into. Blass's definition departs completely from this tradition: at every position $s$, only one of the two players is allowed to move; extending this logic to the empty position $\emptyplay$, it follows that all games are either positive or negative. This property means that we may freely define $L_A=P_A$, since there is never any question about whose turn it is to play. By contrast, if we were to define $L_A=P_A$ for Abramsky-Jagadeesan games, then a strategy might end up containing two branches, one of plays beginning with an $O$-move and one of plays beginning with a $P$-move, which is undesirable. The alternative definition of $L_A$ avoids this problem.
In the case of a Blass game $A$, we may define a function $\zeta_A\colon P_A\to\OP$ that says which player owns each play; the idea is that if we are in position $s$, then the next player to move is given by $\neg\zeta_A(s)$; i.e., the opposing player to the player who has just made the move. One might want to define $\zeta_A$ by setting $\zeta_A(sa)=\lambda_A(a)$, so that ownership of a play is decided by who has made the last move in the play, but this definition does not extend in an obvious way to the empty position $\emptyplay$ (and, as we shall see in the next chapter, it does not extend to plays over limit ordinals). In this case, $\zeta_A(\emptyplay)$ is part of the game's data, and it determines whether the game is positive or negative: if $\zeta_A(\emptyplay) = P$ then all plays must start with an $O$-move, and the game is negative -- and vice versa.
An important question then arises: how should we extend the function $\zeta_A$ to games formed from connectives? The solution adopted by Blass is to use binary operations on $\OP$ to deduce the ownership of a play from the ownership of the restrictions of that play to the two component games. In the case of the tensor product $A\tensor B$ of two games $A$ and $B$, we define $\cmap{\zeta_{A\tensor B}}{P_{A\tensor B}}{\OP}$ by setting
\[
\zeta_{A\tensor B}(s) = (\zeta_A(s\vert_A) \wedge \zeta_B(s\vert_B))
\]
where $\cmap{\wedge}{\OP\times \OP}{\OP}$ is as in Figure \ref{truthtables}.
\begin{figure}[h]
\begin{center}
$\begin{array}{cc|c}
a & b & a \wedge b \\
\hline
O & O & O \\
O & P & O \\
P & O & O \\
P & P & P
\end{array}$
\quad
$\begin{array}{cc|c}
a & b & a \vee b \\
\hline
O & O & O \\
O & P & P \\
P & O & P \\
P & P & P
\end{array}$
\quad
$\begin{array}{cc|c}
a & b & a \Rightarrow b \\
\hline
O & O & P \\
O & P & P \\
P & O & O \\
P & P & P
\end{array}$
\caption{Truth tables for binary operations on $\OP$}
\label{truthtables}
\end{center}
\end{figure}
Similarly, we may extend $\zeta$ to the implication $A\implies B$ and the par $A\parr B$ by setting
\begin{align*}
\zeta_{A\implies B}(s)=(\zeta_A(s\vert_A)\Rightarrow \zeta_B(s\vert_B))\\
\zeta_{A\parr B}(s) = (\zeta_A(s\vert_A) \vee \zeta_B(s\vert_B))
\end{align*}
Note that if we use these definitions then the owner $\zeta_C(sa)$ of a play $sa$ might not correspond to the player $\lambda_C(a)$ who played the last move $a$. For example, let $A,B$ be two positive games and form their tensor product $A\tensor B$. Then we have
\[
\zeta_{A\tensor B}(\emptyplay) = (\zeta_A(\emptyplay) \wedge \zeta_B(\emptyplay)) = O \wedge O = O
\]
and so $A\tensor B$ is a positive game. Player $P$ plays an opening move in one of the two games -- let us say she plays the move $a$ in the game $A$. But then we have
\[
\zeta_{A\tensor B}(a) = (\zeta_A(a) \wedge \zeta_B(\emptyplay)) = P \wedge O = O
\]
In other words, it is still player $P$'s turn to play! Blass embraces this possibility and allows player $P$ to make these two moves. In his paper, he introduces the notions of \emph{strict} and \emph{relaxed} games, where the strict games are the objects of study but the relaxed games are often used since they allow more manipulations. In this case, the game $A\tensor B$ is defined as a relaxed game that might not satisfy the alternating condition; in the process of converting it into a strict game, these two opening moves by player $P$ are combined into a single move.
This `double move' can only occur at the start of the game, and Blass treats it as a special case in his proofs. Perhaps unsurprisingly, this inconsistency causes major problems if we try to compose strategies. We do not get an associative composition of strategies for $A\implies B$ with strategies for $B\implies C$ and so we do not get a categorical semantics. An example of the failure of associativity in Blass's games model is given towards the end of \cite{abramskyjagadeesangames}.
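The ownership computation above can be replayed concretely. The following sketch (an illustration only; the encoding of the players as the strings "O" and "P" is mine) tabulates the $\wedge$ operation given earlier and checks that the tensor of two positive games hands player $P$ two consecutive moves:

```python
# Truth table for the wedge operation on owners, as given earlier in the text.
AND = {("O", "O"): "O", ("O", "P"): "O", ("P", "O"): "O", ("P", "P"): "P"}

def zeta_tensor(owner_a, owner_b):
    """Owner of a tensor position, from the owners of its two restrictions."""
    return AND[(owner_a, owner_b)]

# Two positive games: the empty position of each is owned by O, so the
# tensor is positive and P moves first.
assert zeta_tensor("O", "O") == "O"

# After P's opening move a in A, zeta_A(a) = P but zeta_B is still O, so the
# tensor position is still owned by O -- it is P's turn again: the `double
# move' that Blass's strict/relaxed distinction is designed to absorb.
assert zeta_tensor("P", "O") == "O"
```

Once both components have been opened, ownership behaves as expected: a position whose restrictions are both $P$-owned is $P$-owned.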
By contrast, Abramsky-Jagadeesan games may admit moves by both players at the same position (specifically, at the beginning of the game, before any moves have been played), but this does not cause problems since we insist that our legal plays start with an $O$-move and be strictly alternating. The authors of \cite{abramskyjagadeesangames} note that their model can be considered as an intermediate between Conway's games, where the position tells you nothing about which player is to move, and Blass games, where the position completely determines which player is to move. In Abramsky-Jagadeesan games, one can deduce which player is to move (by looking at which player made the last move) in every position except the empty starting position.
In the Abramsky-Jagadeesan model, a positive game is an immediate win to player $P$, since player $O$ has no legal move to start the game off. As we noted before, this does not mean that the content of a positive game is meaningless, since we can use connectives to `unlock' these illegal plays. For example, if $Q$ is a positive game and $N$ is a negative game then $Q\parr N$ is a negative game, and the possible positions in $Q$ are now all achievable.
Curien's game model (\cite{curiengames}) is similar to Abramsky's and Jagadeesan's, but involves only negative games. The only slight problem is that negative Abramsky-Jagadeesan games are not closed under implication: if $N,L$ are negative games then $N\implies L$ may be neither negative nor positive. We may fix this by modifying the definition of $N\implies L$ so that we delete from $P_{N\implies L}$ all plays that start with a $P$-move - or, equivalently, by requiring that all plays start in $L$. This is the approach taken in \cite{martinsthesis}, where it fits well with the paper's treatment of the \emph{sequoid} operator $\sequoid$, which is a version of the tensor product that has been modified so that play is required to start in the left-hand game.
Our approach will be similar to that of Curien's in that we will admit only negative games as objects of our category, but we will also incorporate elements of Blass's model: each position $s$ in our game will be designated either an $O$-position or a $P$-position via a function $\zeta$ and we will extend $\zeta$ to the connectives using the truth tables in Figure \ref{truthtables}. We shall start by developing the theory of finitary games in order to illustrate our approach, but it will not be until we start considering transfinite games that we see the value of this aspect of Blass's semantics: transfinite games may contain many non-empty plays that have no last move and we will want to designate them as $P$-positions or $O$-positions so that we know which player moves next.
\subsection{Our definition of games and strategies}
\begin{definition}
A \emph{game} is a quadruple $(M_A,\lambda_A,\zeta_A,P_A)$ where
\begin{itemize}
\item $M_A$ is a set of moves,
\item $\cmap{\lambda_A}{M_A}{\OP}$ is a function that assigns a player to each move,
\item $P_A\subset M_A^*$ is a non-empty prefix-closed set of plays that can occur in the game and
\item $\cmap{\zeta_A}{P_A}{\OP}$ is a function that assigns a player to each position
\end{itemize}
such that
\begin{itemize}
\item If $a\in M_A$ and $sa\in P_A$ then $\zeta_A(sa)=\lambda_A(a)$.
\item If $a\in M_A$ and $sa\in P_A$ then $\zeta_A(s)=\neg\zeta_A(sa)$.
\end{itemize}
\end{definition}
\begin{remark}[Notes on the definition]
Given a game $A=(M_A,\lambda_A,\zeta_A,P_A)$, define $b_A=\neg\zeta_A(\emptyplay)$. Then every play in $P_A$ must start with a $b_A$-move, so $A$ is either positive or negative.
Note that $\zeta_A$ is now completely specified by $\lambda_A$ and $b_A$, so we could have specified our games more efficiently by replacing $\zeta_A$ with $b_A$ in our definition, as done in \cite{martinsthesis}. The slightly more unwieldy $\zeta_A$ will be useful when we come to extend our games over the ordinals, though, so we retain it.
If $a\in M_A$ then we may recover $\lambda_A(a)$ from $\zeta_A$ so long as $a$ occurs in some play in $P_A$. Since moves that can never be played do not affect the game at all, we do not really need $\lambda_A$ in our definition, but we keep it to make the connection to earlier work clearer.
If $a\in M_A$ and $\lambda_A(a)=O$, we call $a$ an \emph{$O$-move}. If $\lambda_A(a)=P$, we call $a$ a \emph{$P$-move}. If $s\in P_A$ and $\zeta_A(s)=O$, we call $s$ an \emph{$O$-play} or \emph{$O$-position}. If $\zeta_A(s)=P$, we call $s$ a \emph{$P$-play} or \emph{$P$-position}.
\end{remark}
We give the usual definition of a strategy as an appropriate subset of $P_A$, where we have identified the strategy with the set of all plays that can arise when player $P$ plays according to that strategy:
\begin{definition}
Let $A=(M_A,\lambda_A,\zeta_A,P_A)$ be a game. A \emph{strategy} for $A$ is a non-empty prefix-closed subset $\sigma\subset P_A$ such that:
\begin{itemize}
\item If $a\in M_A$ is an $O$-move and $s\in\sigma$ is a $P$-position such that $sa\in P_A$, then $sa\in\sigma$.
\item If $s\in\sigma$ is an $O$-position and $a,b\in M_A$ are $P$-moves such that $sa,sb\in\sigma$, then $a=b$.
\end{itemize}
\end{definition}
The definition above will be the most technically useful, but it is also convenient to give a strategy by a partial function that tells player $P$ which move to make in each $O$-position. If we write $P_A^-=\zeta_A\inv(\{O\})$ for the set of all $O$-positions and $M_A^+=\lambda_A\inv(\{P\})$ for the set of all $P$-moves, then the strategy $\sigma$ above gives rise to a partial function $\hat\sigma\from P_A^-\pfun M_A^+$ given by:
\[
\hat\sigma(s)=
\begin{cases}
a & \textrm{if $a\in M_A^+$ and $sa\in\sigma$}\\
\textrm{undefined} & \textrm{if $sb\not\in\sigma$ for all $b\in M_A^+$}
\end{cases}
\]
The second condition on strategies tells us that this is well defined. Going in the other direction, if we are given a partial function $f\from P_A^-\pfun M_A^+$ then we can define a strategy $\overline{f}$ by
\[
\overline f = \{s\in P_A\suchthat \textrm{for all $ta\prefix s$ with $\zeta_A(t)=O$, $f(t)$ is defined and equal to $a$}\}
\]
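The passage between the two presentations can be made concrete. The following Python sketch is our own illustration (the one-question game, the move names $q,a,b$ and the use of \texttt{None} for `undefined' are all assumptions): it computes $\hat\sigma$ from a strategy-as-set and recovers $\overline{f}$ from a partial function.

```python
# A tiny negative game: O may ask "q", and P may answer "a" or "b".
# (The game and all names here are illustrative assumptions.)
LAMBDA = {"q": "O", "a": "P", "b": "P"}       # lambda_A: owner of each move
PLAYS = {(), ("q",), ("q", "a"), ("q", "b")}  # P_A, prefix closed

def zeta(s):
    # zeta_A: the empty play is a P-position (the game is negative);
    # otherwise zeta_A(sa) = lambda_A(a)
    return "P" if not s else LAMBDA[s[-1]]

sigma = {(), ("q",), ("q", "a")}              # the strategy answering q with a

def sigma_hat(s):
    # hat(sigma): the unique P-move that sigma plays after the O-position s,
    # or None when sigma has no response ("undefined")
    for m, owner in LAMBDA.items():
        if owner == "P" and s + (m,) in sigma:
            return m
    return None

def strategy_of(f):
    # bar(f): the plays all of whose P-moves were chosen by f
    return {s for s in PLAYS
            if all(f(s[:i]) == s[i]
                   for i in range(len(s)) if zeta(s[:i]) == "O")}
```

On this example, `strategy_of(sigma_hat)` recovers `sigma`, illustrating that the two presentations carry the same information.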
\subsection{Connectives}
Our definitions of connectives on games are as in \cite{blassgames}.
\begin{definition}
Let $A=(M_A,\lambda_A,\zeta_A,P_A)$ be a game. The negation of $A$, $\neggame{A}$, is the game formed by interchanging the roles of the two players.
\begin{itemize}
\item $M_{\bneggame A}=M_A$
\item $\lambda_{\bneggame A} = \comp\neg{\lambda_A}$
\item $\zeta_{\bneggame A} = \comp\neg{\zeta_A}$
\item $P_{\bneggame A} = P_A$
\end{itemize}
\end{definition}
It follows immediately from the definitions that $\neggame A$ is a well formed game.
We now define the tensor product $A\tensor B$, the par $A\parr B$ and the linear implication $A\implies B$ of two games. All these games are obtained by playing $A$ and $B$ in parallel, so they all have the same set of moves:
\[
M_{A\tensor B} = M_{A\parr B} = M_{A\implies B} = M_A \cprd M_B
\]
Ownership of moves is decided via the obvious co-pairing functions:
\begin{IEEEeqnarray*}{c}
\lambda_{A\tensor B} = \lambda_{A\parr B} = \lambda_A \cprd \lambda_B\\
\lambda_{A\implies B} = (\neg\circ\lambda_A) \cprd \lambda_B
\end{IEEEeqnarray*}
We define the set $P_A\|P_B$ to be the set of all plays in $(M_A \cprd M_B)^*$ whose $M_A$-component is a play from $P_A$ and whose $M_B$-component is a play from $P_B$:
\[
P_A\|P_B = \{s\in (M_A\cprd M_B)^*\suchthat s\vert_A\in P_A,\; s\vert_B\in P_B\}
\]
We are now in a position to define $\zeta_{A\tensor B},\zeta_{A\parr B},\zeta_{A\implies B}$ as functions $P_A\|P_B\to\OP$:
\begin{IEEEeqnarray*}{rCl}
\zeta_{A\tensor B}(s) & = & \zeta_A(s\vert_A) \wedge \zeta_B(s\vert_B) \\
\zeta_{A\parr B}(s) & = & \zeta_A(s\vert_A) \vee \zeta_B(s\vert_B) \\
\zeta_{A\implies B}(s) & = & \zeta_A(s\vert_A) \Rightarrow \zeta_B(s\vert_B)
\end{IEEEeqnarray*}
(Here, $\wedge$, $\vee$ and $\Rightarrow$ are as in Figure \ref{truthtables}.)
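The player-valued connectives can be written out explicitly. The following Python sketch assumes that Figure \ref{truthtables} contains the standard Blass tables - a conjunction is $P$ exactly when both arguments are $P$, a disjunction is $O$ exactly when both are $O$, and $x\Rightarrow y$ is $\neg x\vee y$ - an assumption consistent with every instance of the tables used in this section.

```python
# The two players, and the connectives on players assumed to match
# Figure "truthtables" (standard Blass tables - an assumption).
O, P = "O", "P"

def neg(x):
    # negation interchanges the players
    return P if x == O else O

def conj(x, y):
    # zeta_{A tensor B}: P exactly when both components are P-positions
    return P if (x, y) == (P, P) else O

def disj(x, y):
    # zeta_{A par B}: O exactly when both components are O-positions
    return O if (x, y) == (O, O) else P

def imp(x, y):
    # zeta_{A -o B} = neg(x) "or" y, matching A -o B = (neg A) par B
    return disj(neg(x), y)
```

For instance, `imp(P, P) == P` is the equation used later to show that the implication of two negative games is again negative.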
As things stand, $P_A\|P_B$ is not a valid set of plays for our $\lambda$ and $\zeta$ functions. The first problem is that $P_A\|P_B$ contains plays which are not alternating with respect to $\zeta_{A\tensor B},\zeta_{A\parr B},\zeta_{A\implies B}$. The second problem is that the $\zeta$ and $\lambda$ functions do not always agree with one another. For example, suppose that $Q,R$ are two positive games. So we have $\zeta_Q(\emptyplay) = \zeta_R(\emptyplay) = O$. Then $\zeta_{Q\tensor R}(\emptyplay) = O\wedge O = O$. But now suppose that player $P$ can make an opening move $q$ in $Q$. Then we have
\[
\zeta_{Q\tensor R}(q) = \zeta_Q(q) \wedge \zeta_R(\emptyplay) = P \wedge O = O
\]
but $\lambda_{Q\tensor R}(q) = P$.
Our solution is to throw away some plays from $P_A\| P_B$ so that what remains satisfies the alternating condition and the condition on the $\lambda$ and $\zeta$ functions.
\begin{definition}
Let $M$ be a set, let $P\subset M^*$ be prefix closed and let $\cmap{\lambda}{M}{\OP},\cmap{\zeta}{P}{\OP}$ be functions. If $s\in P$, we say that $s$ is \emph{alternating with respect to $\zeta$} if the set of prefixes of $s$ satisfies the alternating condition: if $t,ta\prefix s$, then $\zeta(t) = \neg\zeta(ta)$.
We say that $s$ is \emph{well formed with respect to $\lambda,\zeta$} if whenever $ta\prefix s$, for some $a\in M$, we have $\zeta(ta)=\lambda(a)$.
Now let $A,B$ be games. We define:
\begin{IEEEeqnarray*}{rCl}
P_{A\tensor B} & = & \left\{s\in P_A\|P_B \;\middle|\; \mbox{\pbox{\textwidth}{$s$ is alternating with respect to $\zeta_{A\tensor B}$ \\ $s$ is well formed with respect to $\zeta_{A\tensor B},\lambda_{A\tensor B}$}}\right\} \\
P_{A\parr B} & = & \left\{s\in P_A\|P_B \;\middle|\; \mbox{\pbox{\textwidth}{$s$ is alternating with respect to $\zeta_{A\parr B}$ \\ $s$ is well formed with respect to $\zeta_{A\parr B},\lambda_{A\parr B}$}}\right\} \\
P_{A\implies B} & = & \left\{s\in P_A\|P_B \;\middle|\; \mbox{\pbox{\textwidth}{$s$ is alternating with respect to $\zeta_{A\implies B}$ \\ $s$ is well formed with respect to $\zeta_{A\implies B},\lambda_{A\implies B}$}}\right\}
\end{IEEEeqnarray*}
\end{definition}
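For small finite games, $P_{A\tensor B}$ can be computed by brute force. The sketch below is our own illustration (the two one-question games, the tags \texttt{"A"}/\texttt{"B"} and the length bound are assumptions): it filters the interleavings in $P_A\|P_B$ for the alternating and well-formedness conditions, using the conjunction on players ($P\wedge P=P$, otherwise $O$).

```python
from itertools import product

O, P = "O", "P"

def conj(x, y):
    # the conjunction on players: P exactly when both arguments are P
    return P if (x, y) == (P, P) else O

def restrict(s, side):
    # s|_A or s|_B: keep the moves of one component, dropping the tags
    return tuple(m for tag, m in s if tag == side)

# two one-question negative games (an illustrative choice):
# in A, O asks q and P answers a; in B, O asks r and P answers b
lamA, PA = {"q": O, "a": P}, {(), ("q",), ("q", "a")}
lamB, PB = {"r": O, "b": P}, {(), ("r",), ("r", "b")}
zA = lambda s: P if not s else lamA[s[-1]]
zB = lambda s: P if not s else lamB[s[-1]]

def tensor_plays(maxlen=4):
    lam = lambda mv: (lamA if mv[0] == "A" else lamB)[mv[1]]
    zeta = lambda s: conj(zA(restrict(s, "A")), zB(restrict(s, "B")))
    moves = [("A", m) for m in lamA] + [("B", m) for m in lamB]
    out = set()
    for n in range(maxlen + 1):
        for s in product(moves, repeat=n):
            if restrict(s, "A") not in PA or restrict(s, "B") not in PB:
                continue  # s is not in P_A || P_B
            if all(zeta(s[:i + 1]) == lam(s[i])        # well formed
                   and zeta(s[:i]) != zeta(s[:i + 1])  # alternating
                   for i in range(n)):
                out.add(s)
    return out
```

On these games the play $qarb$ survives the filtering, while $qr$ - in which player $P$ fails to reply - is discarded.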
\begin{definition}
We define:
\begin{IEEEeqnarray*}{cCc}
A\tensor B & = & (M_{A\tensor B}, \lambda_{A\tensor B}, \zeta_{A\tensor B}, P_{A\tensor B}) \\
A\parr B & = & (M_{A\parr B}, \lambda_{A\parr B}, \zeta_{A\parr B}, P_{A\parr B}) \\
A\implies B & = & (M_{A\implies B}, \lambda_{A\implies B}, \zeta_{A\implies B}, P_{A\implies B})
\end{IEEEeqnarray*}
\end{definition}
\begin{proposition}
$A\tensor B$, $A\parr B$ and $A\implies B$ are well formed games. Moreover, $P_{A\tensor B}$ is the largest subset $X\subset P_A\|P_B$ such that $(M_{A\tensor B}, \lambda_{A\tensor B}, \zeta_{A\tensor B}, X)$ is a well formed game and similarly for the connectives $\parr$ and $\implies$.
\begin{proof}
We will prove the proposition for $A\tensor B$; the other two cases are entirely similar. Alternatively, observe that $A\parr B=\neggame{(\neggame A \tensor \neggame B)}$ and $A\implies B = \neggame A \parr B$.
For $A\tensor B$, it suffices to show that $P_{A\tensor B}$ is alternating with respect to $\zeta_{A\tensor B}$, since every $s\in P_{A\tensor B}$ is well-formed by definition. Suppose $s,sa\in P_{A\tensor B}$; then $s,sa\prefix sa$; since $sa$ is alternating with respect to $\zeta_{A\tensor B}$, it follows that $\zeta_{A\tensor B}(s) = \neg\zeta_{A\tensor B}(sa)$.
For the second part of the proposition, suppose that $V\subset P_A\|P_B$ is prefix-closed and satisfies the alternating condition with respect to $\zeta_{A\tensor B}$ and that every $s\in V$ is well-formed with respect to $\lambda_{A\tensor B},\zeta_{A\tensor B}$. We need to show that $V\subset P_{A\tensor B}$, for which it will suffice to show that every $s\in V$ is alternating. This is easy to see: since $V$ is prefix closed, the set of all prefixes of $s$ is a subset of $V$, and so it satisfies the alternating condition with respect to $\zeta_{A\tensor B}$.
\end{proof}
\end{proposition}
A design feature of the connectives $\tensor,\parr,\implies$ is that only player $O$ may switch games in $A\tensor B$, while only player $P$ may switch games in $A\parr B$ and $A\implies B$. The $\implies$ case follows immediately from the $\parr$ case by noting that $A\implies B = \neggame A\parr B$, and the $\parr$ case then follows from the $\tensor$ case by observing that $A\parr B=\neggame{(\neggame A \tensor \neggame B)}$. Thus, it will suffice to prove the following proposition for the tensor product:
\begin{proposition}
\label{WhoSwitchesGames}
Let $A,B$ be games. Suppose $s\in P_{A\tensor B}$, $a\in M_A$ and $b\in M_B$. Then:
\begin{enumerate}[i)]
\item If $sab\in P_{A\tensor B}$ then $\lambda_{A\tensor B}(b)=O$.
\item If $sba\in P_{A\tensor B}$ then $\lambda_{A\tensor B}(a)=O$.
\end{enumerate}
\begin{proof}
(i): $\begin{aligned}[t]
\lambda_{A\tensor B}(b) & = \zeta_{A\tensor B}(sab) \\
& = \zeta_A(s\vert_A a) \wedge \zeta_B(s\vert_B b) \\
& = \lambda_{A\tensor B}(a) \wedge \lambda_{A\tensor B}(b)
\end{aligned}$
By alternation, either $\lambda_{A\tensor B}(a)=O$ or $\lambda_{A\tensor B}(b)=O$, so this last expression must be equal to $O$.
(ii): \noindent$\begin{aligned}[t]
\lambda_{A\tensor B}(a) & = \zeta_{A\tensor B}(sba) \\
& = \zeta_A(s\vert_A a) \wedge \zeta_B(s\vert_B b) \\
& = \lambda_{A\tensor B}(a) \wedge \lambda_{A\tensor B}(b) = O\textrm{ (by the same argument)} && \quad\;\qedhere
\end{aligned}$
\end{proof}
\end{proposition}
\subsection{A category of games and partial strategies}
\label{CategoricalSemantics}
Following \cite{joyalgames} and \cite{abramskyjagadeesangames}, we define a category $\G$ whose objects are games where the morphisms from a game $A$ to a game $B$ are strategies for $A\implies B$. For the sake of simplicity, and to avoid various technical issues, we shall require that the games in our category be \emph{negative} - namely, they should start with an opponent move. In our language, we call a game $A$ \emph{negative} if $\zeta_A(\emptyplay)=P$ (and we call it positive if $\zeta_A(\emptyplay)=O$). The equations
\begin{gather*}
P \wedge P = P \\
P \Rightarrow P = P
\end{gather*}
tell us that the class of negative games is closed under $\tensor$ and $\implies$ (since $P\vee P=P$, it is also closed under $\parr$, but the par of two negative games has no legal moves in our presentation, so it does not give a good model of the $\parr$ connective in linear logic and we will not consider it).
In order to get a category, we need a way to compose a strategy for $A\implies B$ with a strategy for $B\implies C$. Our treatment of composition is heavily influenced by work done by Hyland and Schalk (see \cite{hyland1997games}, \cite{hylandschalkgames} and the excellent set of notes \cite{Schalk2001GsNotes}).
We shall use the usual definition of composition. If $\sigma$ is a strategy for $A\implies B$ and $\tau$ is a strategy for $B\implies C$, we define a set
\[
\sigma\|\tau = \{\s\in (M_A \cprd M_B \cprd M_C)^*\suchthat \s\vert_{A,B}\in\sigma,\;\s\vert_{B,C}\in\tau\}
\]
Then we define the composite strategy on $A\implies C$ to be
\[
\comp\tau\sigma = \{\s\vert_{A,C}\suchthat\s\in\sigma\|\tau\}
\]
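The parallel-composition-plus-hiding recipe can be tested by brute force on small finite games. In the sketch below (our own illustration; the copycat-shaped strategies, the move names and the length bound are all assumptions), composing two copycat-shaped strategies yields the expected copycat on $A\implies C$, with all $B$-moves hidden.

```python
from itertools import product

# Moves are tagged by component: ("A", m), ("B", m), ("C", m).
def restrict(s, tags):
    return tuple(mv for mv in s if mv[0] in tags)

def compose(sigma, tau, moves, maxlen):
    # tau . sigma = { s|_{A,C} : s in sigma || tau }, computed by brute
    # force over all tagged sequences up to the given length
    out = set()
    for n in range(maxlen + 1):
        for s in product(moves, repeat=n):
            if restrict(s, {"A", "B"}) in sigma and restrict(s, {"B", "C"}) in tau:
                out.add(restrict(s, {"A", "C"}))
    return out

# copycat-shaped strategies on A => B and B => C for one-question games
copycat_AB = {(), (("B", "q"),), (("B", "q"), ("A", "q")),
              (("B", "q"), ("A", "q"), ("A", "a")),
              (("B", "q"), ("A", "q"), ("A", "a"), ("B", "a"))}
copycat_BC = {(), (("C", "q"),), (("C", "q"), ("B", "q")),
              (("C", "q"), ("B", "q"), ("B", "a")),
              (("C", "q"), ("B", "q"), ("B", "a"), ("C", "a"))}
MOVES = [(t, m) for t in "ABC" for m in "qa"]
```

Here `compose(copycat_AB, copycat_BC, MOVES, 6)` returns the copycat-shaped strategy on $A\implies C$.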
The process of showing that this is indeed a strategy for $A\implies C$, and that composition is associative, will be quite involved. We shall start by recording some nice properties of negative games that will tell us that $\comp\tau\sigma$ is alternating:
\begin{proposition}
Let $A,B$ be negative games and suppose that $s\in P_A\|P_B$ is alternating with respect to $\zeta_{A\tensor B}$. Then:
\begin{enumerate}[i)]
\item Either $\zeta_{A}(s\vert_A)=P$ or $\zeta_{B}(s\vert_B)=P$.
\item $s$ is well formed with respect to $\zeta_{A\tensor B},\lambda_{A\tensor B}$.
\end{enumerate}
Furthermore, (i) and (ii) hold if we replace $\tensor$ with $\implies$ throughout and if we replace (i) by the statement that either $\zeta_A(s\vert_A)=P$ or $\zeta_B(s\vert_B)=O$.
\begin{proof}
See Appendix \ref{nice-negative-games-proof}.
\end{proof}
\label{nice-negative-games}
\end{proposition}
We gave an example earlier in which a play $sa$ in $P_A\|P_B$ was not well formed with respect to $\zeta_{A\tensor B},\lambda_{A\tensor B}$. The crucial thing that made that example work was that we had $\zeta_A(s\vert_A)=\zeta_B(s\vert_B)=O$. This then meant that $\zeta_{A\tensor B}(sa)=O$ even when $\lambda_{A\tensor B}(a)=P$. Part (i) of the proposition above tells us that this situation never occurs, and part (ii) tells us that this is enough to ensure that we never get any alternating sequences that are not well formed.
A corollary of our result is that we have:
\begin{align*}
P_{A\tensor B} & = \{s\in P_A\|P_B\suchthat \textrm{$s$ is alternating with respect to $\zeta_{A\tensor B}$}\} \\
P_{A\implies B} & = \{s\in P_A\|P_B\suchthat \textrm{$s$ is alternating with respect to $\zeta_{A\implies B}$}\}
\end{align*}
if $A,B$ are negative games. In other words, we can ignore the well-formedness condition on plays if we are dealing with only negative games.
This proposition allows us to prove the following useful fact:
\begin{lemma}
\label{alternatingsequenceslemma}
Let $A,B,C$ be negative games, and let $\s\in (M_A\cprd M_B\cprd M_C)^*$. If any two of the following statements are true, then so is the third:
\begin{gather*}
\s\vert_{A,B}\in P_{A\implies B}\\
\s\vert_{A,C}\in P_{A\implies C}\\
\s\vert_{B,C}\in P_{B\implies C}
\end{gather*}
\begin{proof}
See Appendix \ref{alternatingsequenceslemmaproof}.
\end{proof}
\end{lemma}
Now we can move on to the main work. We start with a surprising lemma about strategies on the implication $A\implies B$.
\begin{lemma}
\label{PlayInStrategyDeterminedByComponents}
Let $A,B$ be games and let $\sigma$ be a strategy for $A\implies B$. Suppose that $s,t\in\sigma$ are such that $s\vert_A=t\vert_A$ and $s\vert_B=t\vert_B$. Then $s=t$.
\begin{proof}
Suppose that $s\ne t$. Let $r\prefix s,t$ be the longest common subsequence of $s$ and $t$ - so we have $rx\prefix s, ry\prefix t$ for some moves $x,y\in M_{A\implies B}$ with $x\ne y$. Since $rx,ry$ are both substrings of the strategy $\sigma$, $r$ must be a $P$-position and therefore the moves $x,y$ must take place in the same game. Without loss of generality, suppose that $x,y\in M_A$. Then we have $r\vert_A x\prefix s\vert_A,r\vert_Ay\prefix t\vert_A$ and so $s\vert_A\ne t\vert_A$.
\end{proof}
\end{lemma}
Thus we see that the plays according to some given strategy $\sigma$ for $A\implies B$ are characterized by their $A$- and $B$-components. More is true, however: given games $A$ and $B$, the strategies on $A\implies B$ are characterized by the $A$- and $B$-components of their constituent plays:
\begin{lemma}
\label{HylandSchalkFaithful}
Let $A,B$ be games and let $\sigma,\sigma'$ be strategies for $A\implies B$ such that
\[
\{(s\vert_A,s\vert_B)\suchthat \textrm{$s\in\sigma$ is a $P$-position}\}=\{(s'\vert_A,s'\vert_B)\suchthat\textrm{$s'\in\sigma'$ is a $P$-position}\}
\]
Then $\sigma=\sigma'$.
\begin{proof}
See Appendix \ref{HylandSchalkFaithfulProof}.
\end{proof}
\end{lemma}
Given games $A,B$ and a strategy $\sigma$ for $A\implies B$, the set
\[
\{(s\vert_A,s\vert_B)\suchthat \textrm{$s\in\sigma$ is a $P$-position}\}\subset P_A\times P_B
\]
is a \emph{relation} between $P_A$ and $P_B$. Writing $\grel\sigma\subset P_A\times P_B$, we will show that if $A,B,C$ are games and $A\xrightarrow{\sigma}\map{\tau}{B}{C}$ are morphisms, then:
\[
\grel{\comp\tau\sigma}=\comp{\grel\tau}{\grel\sigma}
\]
under the usual composition of relations:
\[
\comp{\grel\tau}{\grel\sigma} = \{(s,t)\in P_A\times P_C\suchthat \exists u\in P_B\esuchthat (s,u)\in\grel\sigma,\;(u,t)\in\grel\tau\}
\]
Thus we will get a functor $\cmap{\F}{\G}{\Rel}$, where $\G$ is our category of negative games and $\Rel$ is the category of sets and relations. Lemma \ref{HylandSchalkFaithful} then tells us that this functor is faithful.
It is clear from the definition of $\comp\tau\sigma$ that $\grel{\comp\tau\sigma}\subset\comp{\grel\tau}{\grel\sigma}$; indeed if $\s\in\sigma\|\tau$, then $(\s\vert_A,\s\vert_B)\in\grel\sigma$ and $(\s\vert_B,\s\vert_C)\in\grel\tau$, so $(\s\vert_A,\s\vert_C)\in\comp{\grel\tau}{\grel\sigma}$. To show the reverse inclusion, we need to show the following: if $s\in\sigma$ and $t\in\tau$ are such that $s\vert_B=t\vert_B$, then there is some $\s\in (M_A\cprd M_B\cprd M_C)^*$ such that $\s\vert_{A,B}=s$ and $\s\vert_{B,C}=t$.
This can be shown directly. Roughly speaking, we write out the sequence $s$ and then insert immediately after each $B$-move the $C$-moves from $t$ that occur after that $B$-move and before the next one (any $C$-moves of $t$ occurring before its first $B$-move are placed at the start). This technique works for arbitrary sequences and is sufficient to prove that $\grel{\comp\tau\sigma}=\comp{\grel\tau}{\grel\sigma}$. However, it turns out that if $s$ and $t$ are alternating plays in $A\implies B$ and $B\implies C$ then this interleaving is unique - and even more is true:
\begin{lemma}
\label{LiftingLemma}
Let $A,B,C$ be negative games, let $\sigma$ be a strategy for $A\implies B$ and let $\tau$ be a strategy for $B\implies C$. Suppose that $s\in\sigma$ and $t\in\tau$ are such that $s\vert_B=t\vert_B$. Let $n=\length(s)+\length(t\vert_C)$; i.e., the total number of moves of $s$ and $t$, counting their common $B$-component only once. Then for all $k$ with $0\le k\le n$, there is a unique sequence $\s^k\in\sigma\|\tau$ of length $k$ such that $\s^k\vert_A\prefix s\vert_A$ and $\s^k\vert_C\prefix t\vert_C$. Moreover, for this $\s^k$ we have $\s^k\vert_{A,B}\prefix s$ and $\s^k\vert_{B,C}\prefix t$, and if $j\le k$ then $\s^j\prefix \s^k$.
\begin{proof}
See Appendix \ref{LiftingLemmaProof}.
\end{proof}
\end{lemma}
Note that Lemma \ref{LiftingLemma} is stronger than saying that there is a unique sequence $\s^k$ of length $k$ such that $\s^k\vert_{A,B}\prefix s$ and $\s^k\vert_{B,C}\prefix t$. Instead, we are saying that there exists a sequence $\s^k$ with those properties, but that it is uniquely determined even when we make no requirement on its $B$-component other than that the whole sequence is contained in $\sigma\|\tau$.
As a corollary, we see that the sequences $s$ and $t$ may be interleaved: set $\s=\s^n$; then we have $\s\vert_{A,B}\prefix s$ and $\s\vert_{B,C}\prefix t$. In fact, we have $\s\vert_{A,B}=s$ and $\s\vert_{B,C}=t$ by a length argument:
\[
\length(\s\vert_{A,B})\le\length(s)=n-\length(t\vert_C)\le n-\length(\s\vert_C)=\length(\s\vert_{A,B})
\]
so we must have equality everywhere, and similarly for $\s\vert_{B,C}$ and $t$.
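The naive interleaving described earlier - copying out $s$ and slotting in the $C$-moves of $t$ after the corresponding $B$-moves - can be sketched as follows (our own illustration; the tagging of moves by component is an assumption):

```python
# Merge s (a play in A => B, tagged A/B) and t (a play in B => C, tagged
# B/C) along their common B-component: copy out s, emitting each C-move
# of t as soon as the B-move preceding it in t has appeared (C-moves of
# t before its first B-move come first).
def interleave(s, t):
    out, j = [], 0
    while j < len(t) and t[j][0] == "C":    # leading C-moves of t
        out.append(t[j]); j += 1
    for mv in s:
        out.append(mv)
        if mv[0] == "B":
            j += 1                          # skip the matching B-move of t
            while j < len(t) and t[j][0] == "C":
                out.append(t[j]); j += 1
    return tuple(out)
```

The result restricts back to $s$ on the $A$- and $B$-moves and to $t$ on the $B$- and $C$-moves, as required.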
We now have all the ingredients ready to prove that $\comp\tau\sigma$ is a well-defined strategy.
\begin{proposition}
Let $A,B,C$ be negative games, let $\sigma$ be a strategy for $A\implies B$ and let $\tau$ be a strategy for $B\implies C$. Then $\comp\tau\sigma$ (as defined above) is a strategy for $A\implies C$.
\begin{proof}
By Lemma \ref{alternatingsequenceslemma}, $\comp\tau\sigma\subset P_{A\implies C}$. Moreover, it is non-empty, since it contains $\emptyplay$. We need to show that the two strategy conditions hold for $\comp\tau\sigma$.
Firstly, suppose that $\s\in\sigma\|\tau$ and $\s\vert_{A,C}$ is a $P$-play in $A\implies C$. Suppose that $\s\vert_{A,C} a\in P_{A\implies C}$ for some $a\in M_A$. We claim that $\s a\in\sigma\|\tau$.
Indeed, certainly $\s a\vert_{B,C}=\s\vert_{B,C}\in\tau$. Moreover, we have $\s a\vert_{A,C}\in P_{A\implies C}$, so Lemma \ref{alternatingsequenceslemma} tells us that $\s a\vert_{A,B}\in P_{A\implies B}$.
Since $a$ is an $O$-move in $A\implies B$, it is an $O$-move in $A\implies C$. Since $\s\vert_{A,B}\in\sigma$, we must have $\s a\vert_{A,B}\in\sigma$, by the definition of a strategy. So $\s a\in\sigma\|\tau$.
By a symmetrical argument, if $\s\vert_{A,C} c\in P_{A\implies C}$ for some $c\in M_C$, then $\s c\in\sigma\|\tau$.
For the second strategy property, suppose that $s\in\comp\tau\sigma$ is an $O$-play and that $sx, sy\in\comp\tau\sigma$. We claim that $x=y$. By the definition of $\comp\tau\sigma$, there must be sequences $\s,\t\in\sigma\|\tau$ with $\s\vert_{A,C}=sx$ and $\t\vert_{A,C}=sy$. Moreover, by removing any $B$-moves from the end of $\s,\t$, we may assume that the last move in $\s$ is $x$ and the last move in $\t$ is $y$. We may write $\s=\s'\b x$ and $\t=\t'\bbeta y$, where $\b$ and $\bbeta$ are sequences composed entirely out of $B$-moves and neither $\s'$ nor $\t'$ ends with a $B$-move. In particular, $\s'\vert_{A,C}=\t'\vert_{A,C}=s$.
Without loss of generality, $\length(\s')\le\length(\t')$. Let $\t''$ be the prefix of $\t'$ that has the same length as $\s'$, writing $\t'=\t''\r$.
Now $\s'$ and $\t''$ have the same length, and we have
\[
\begin{matrix}
\s'\vert_A\prefix\s\vert_A & \t''\vert_A\prefix\s\vert_A \\
\s'\vert_C\prefix\s\vert_C & \t''\vert_C\prefix\s\vert_C
\end{matrix}
\]
so we must have $\s'=\t''$, by uniqueness in Lemma \ref{LiftingLemma}. Now observe that we have
\[
\s'\vert_{A,C}=\t'\vert_{A,C}=\t''\vert_{A,C}\r\vert_{A,C}=\s'\vert_{A,C}\r\vert_{A,C}
\]
Since $\t'=\t''\r$ does not end with a $B$-move, $\r\vert_{A,C}$ must be non-empty if $\r$ is non-empty. Therefore $\r=\emptyplay$, and so $\t'=\t''=\s'$.
We now claim that $\length(\b)=\length(\bbeta)$. Indeed, suppose instead that $\length(\b)<\length(\bbeta)$. Then there would be a prefix $\bbeta'\prefix\bbeta$ of length $\length(\b)+1$. Then we would have
\[
\begin{matrix}
\s'\b x\vert_A=\s\vert_A & \s'\bbeta'\vert_A=\s'\vert_A\prefix\s\vert_A \\
\s'\b x\vert_C=\s\vert_C & \s'\bbeta'\vert_C=\s'\vert_C\prefix\s\vert_C
\end{matrix}
\]
and so $\s'\b x=\s'\bbeta'$ by Lemma \ref{LiftingLemma}, which is clearly impossible. Therefore, $\length(\b)=\length(\bbeta)$. Now we can apply Lemma \ref{LiftingLemma} in the same way to tell us that $\b=\bbeta$.
Suppose that $\zeta_B(\s'\b\vert_B)=P$. Since $\s'\b\in\sigma\|\tau$, this means that $\zeta_A(\s'\b\vert_A)=P$, by Proposition \ref{nice-negative-games}(i). Therefore, $\zeta_{A\implies B}(\s'\b)=P$, so $x$ and $y$ must both be moves in $C$. Then we have $\s'\b\vert_{B,C}x,\s'\b\vert_{B,C}y\in\tau$, and so $x=y$ by the definition of a strategy.
By a symmetrical argument, if $\zeta_B(\s'\b\vert_B)=O$, then $x$ and $y$ must both be moves in $A$. We then have $\s'\b\vert_{A,B}x,\s'\b\vert_{A,B}y\in\sigma$, so $x=y$.
\end{proof}
\end{proposition}
Now we are ready to define our category $\G$ of games. We have defined composition of strategies, so we need to define identity morphisms. Identity morphisms are given by the copycat strategy, as usual:
\[
\id_A=\left\{s\in P_{A\implies A}\suchthat\textrm{for all even length $t\prefix s$, $t\vert_{\bneggame A}=t\vert_A$}\right\}
\]
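The defining condition of $\id_A$ - that every even-length prefix has equal components - is easy to check mechanically. In the sketch below (our own illustration), we tag the left, negated copy of $A$ with \texttt{"L"} and the right copy with \texttt{"R"}.

```python
# The copycat condition: every even-length prefix of the play has equal
# left and right components.  The tags "L"/"R" are our own convention.
def restrict(s, tag):
    return tuple(m for t, m in s if t == tag)

def is_copycat(s):
    return all(restrict(s[:i], "L") == restrict(s[:i], "R")
               for i in range(0, len(s) + 1, 2))

# the copycat play on a one-question game: O asks on the right, P repeats
# the question on the left, O answers on the left, P copies the answer
play = (("R", "q"), ("L", "q"), ("L", "a"), ("R", "a"))
```

A play in which $P$ answers on the right with something other than $O$'s left answer fails the condition, as does a play in which $P$ never copies at all.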
We can now state our first main result.
\begin{theorem}
\label{IsCategory}
The collection of negative games, with morphisms from $A$ to $B$ given by strategies on $A\implies B$ with the notions of composition and identity given above, forms a category $\G$. Moreover, there is a faithful functor $\F\from\G\to\Rel$, where $\Rel$ is the category of sets and relations, given by sending a game $A$ to the set of plays $P_A$ and sending a strategy $\sigma\from A\implies B$ to the set
\[
\grel{\sigma}=\{(s\vert_A,s\vert_B)\suchthat \textrm{$s\in\sigma$ is a $P$-position}\}\subset P_A\times P_B
\]
\begin{proof}
We shall make much use of the relational content of a strategy. We first need to show that the identity is indeed an identity and that composition is associative. For the identity, note that $\grel{\id_A}=\Delta_{P_A}\subset P_A\times P_A$, the identity relation in $\Rel$. If $B,C$ are negative games and $\sigma\colon B\implies A,\tau\colon A\implies C$ are strategies, then from our earlier discussion we have
\[
\grel{\comp{\id_A}{\sigma}}=\comp{\grel{\id_A}}{\grel\sigma}=\comp{\Delta_{P_A}}{\grel\sigma}=\grel\sigma
\]
and therefore $\comp{\id_A}{\sigma}=\sigma$, by Lemma \ref{HylandSchalkFaithful}. Similarly, $\comp{\tau}{\id_A}=\tau$.
For associativity of composition, we hijack the associativity in $\Rel$: if $A,B,C,D$ are negative games, and $\map{\sigma}AB\xrightarrow{\tau}\map{\upsilon}CD$ are morphisms, then we have:
\begin{align*}
\grel{\comp{(\comp\upsilon\tau)}{\sigma}} & = \comp{\grel{\comp\upsilon\tau}}{\grel\sigma} \\
& = \comp{(\comp{\grel\upsilon}{\grel\tau})}{\grel\sigma} \\
& = \comp{\grel\upsilon}{(\comp{\grel\tau}{\grel\sigma})} \quad\textrm{by associativity in $\Rel$} \\
& = \comp{\grel\upsilon}{\grel{\comp\tau\sigma}} \\
& = \grel{\comp{\upsilon}{(\comp\tau\sigma)}}
\end{align*}
and so $\comp{(\comp\upsilon\tau)}{\sigma}=\comp{\upsilon}{(\comp\tau\sigma)}$, by Lemma \ref{HylandSchalkFaithful}.
We have already shown that the map $\F$ respects composition, so it is a functor. It is faithful by Lemma \ref{HylandSchalkFaithful}.
\end{proof}
\end{theorem}
\subsection{Total strategies and winning conditions}
Let $A$ be a game. We call a strategy $\sigma$ for $A$ \emph{total} if it satisfies the additional requirement that whenever $s\in\sigma$ is an $O$-play, there exists some $a\in M_A$ such that $sa\in\sigma$ (before we only required that such a play be unique if it existed).
We might want to talk about a category of negative games and total strategies. However, we cannot do this immediately, since the composition of two total strategies need not be total.
\begin{example}
\label{TotalCompositionExample}
Let $\st$ be the negative game where each player has a unique move at every position:
\begin{gather*}
M_{\st} = \{q,a\} \\
\lambda_{\st}(q) = O,\;\lambda_{\st}(a)=P \\
P_{\st} = \{\emptyplay, q, qa, qaq, qaqa, \dots\}
\end{gather*}
Let $I$ be the negative game with no moves and let $\bot$ be the negative game with a single opponent move and no further play:
\[
\begin{matrix}
M_I=\emptyset & M_\bot = \{*\} \\
\lambda_I=\emptyset & \lambda_\bot(*)=O \\
P_I=\{\emptyplay\} & P_\bot = \{\emptyplay, *\}
\end{matrix}
\]
It is easy to see that if $A$ is a negative game then strategies for $I\implies A$ are the same thing as strategies for $A$ and that strategies for $A\implies\bot$ are the same thing as opponent strategies for $A$. So we get morphisms $\cmap{\sigma}{I}{\st},\cmap{\tau}{\st}{\bot}$ given by the unique player and opponent strategies for $\st$:
\[
\sigma = P_\st\quad\tau = \{\emptyplay\}\cup\{*s\suchthat s\in P_\st\}
\]
Moreover, both these strategies are total.
The plays in $\sigma\|\tau$ are now those sequences $\s\in(M_I\cprd M_\st\cprd M_\bot)^*$ such that:
\begin{gather*}
\s\vert_I=\emptyplay \\
\s\vert_{\st}\in P_\st \\
\textrm{If $\s\ne\emptyplay$ then $\s\vert_\bot=*$}
\end{gather*}
Upon restricting these sequences to $I$ and $\bot$, we see that $\comp\tau\sigma$ contains only two plays: $\emptyplay$ and $*$. Since $*$ is an $O$-play with no $P$-reply in $\comp\tau\sigma$, the strategy $\comp\tau\sigma$ is not total.
\end{example}
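The example can be verified by a bounded brute-force search (our own illustration; plays of $\st$ are truncated at length $4$, which suffices for the search depth used here).

```python
from itertools import product

# Moves tagged by component: I contributes no moves, st is tagged "B",
# bot is tagged "C".
def restrict(s, tags):
    return tuple(mv for mv in s if mv[0] in tags)

st_plays = [(), ("q",), ("q", "a"), ("q", "a", "q"), ("q", "a", "q", "a")]

# sigma : I => st is P_st played in the middle component
sigma = {tuple(("B", m) for m in s) for s in st_plays}
# tau : st => bot opens with * in bot and then follows P_st
tau = {()} | {(("C", "*"),) + tuple(("B", m) for m in s) for s in st_plays}

moves = [("B", "q"), ("B", "a"), ("C", "*")]
composite = set()
for n in range(6):
    for s in product(moves, repeat=n):
        if restrict(s, {"A", "B"}) in sigma and restrict(s, {"B", "C"}) in tau:
            composite.add(restrict(s, {"A", "C"}))
```

The search finds that `composite` contains only the empty play and the single $\bot$-move $*$: the $O$-play $*$ has no $P$-reply, so the composite of the two total strategies is not total.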
Clearly it is the infinite play in the game $\st$ that is causing the problem. We can get around this by insisting that our games contain no infinite plays: if $A$ is a game, we say $A$ has an infinite play if $(P_A,\prefix)$ has an infinite chain.
\begin{proposition}
\label{BoundedTotalComposition}
Let $A,B,C$ be negative games and assume that $B$ has no infinite play. Let $\sigma$ be a total strategy for $A\implies B$ and let $\tau$ be a total strategy for $B\implies C$. Then $\comp\tau\sigma$ is a total strategy for $A\implies C$.
\begin{proof}
We have already shown that $\comp\tau\sigma$ is a strategy for $A\implies C$, so we need to show that it is total. Let $s\in\comp\tau\sigma$ be an $O$-play. We seek a $P$-move $x$ such that $sx\in\comp\tau\sigma$.
By the definition of $\comp\tau\sigma$, there is some $\s\in\sigma\|\tau$ such that $\s\vert_{A,C}=s$. Since $P_B$ contains no infinite chain, we may take $\s$ to be maximal such that $\s\vert_{A,C}=s$.
Since $s$ is an $O$-play, $\s\vert_A$ must be a $P$-play in $A$ and $\s\vert_C$ must be an $O$-play in $C$. Consider the $B$-component $\s\vert_B$ of $\s$. If this is an $O$-play, then $\s\vert_{A,B}$ is an $O$-play in $\sigma$, so, by totality of $\sigma$, there exists $x$ such that $\s\vert_{A,B}x\in\sigma$ and therefore $\s x\in\sigma\|\tau$. Similarly, if $\s\vert_B$ is a $P$-play, then $\s\vert_{B,C}$ is an $O$-play in $\tau$, so there exists $x$ such that $\s\vert_{B,C}x\in\tau$ and so $\s x\in\sigma\|\tau$.
If $x$ is a $B$-move, we have $\s x\vert_{A,C}=s$, contradicting maximality of $\s$. Therefore, $x$ is an $A$- or $C$-move, and we have $sx\in\comp\tau\sigma$.
\end{proof}
\end{proposition}
The result of this proposition means that we could create a category of games and total strategies by requiring that all our games have no infinite plays. However, this approach will not work for us, since our particular version of the exponential connective $\oc$ introduces infinite plays into all non-empty games. Moreover, it seems a bit wasteful to require that none of the games $A,B,C$ have infinite plays when it is only infinite plays in game $B$ that are causing problems.
Fortunately, there is a cleverer way of getting around the problem. Recall that we defined a game to be given by a tuple $(M_A,\lambda_A,\zeta_A,P_A)$. Given such a game, and given some subset $X\subset P_A$, define $X^\omega$ to be the set of all infinite sequences $s\in M_A^\omega$ all of whose finite prefixes lie in $X$; for $X=P_A$, these are precisely the infinite plays of the game.
\begin{definition}
A \emph{win-game} is a tuple $A=(M_A,\lambda_A,\zeta_A,P_A,W_A)$ where $(M_A,\lambda_A,\zeta_A,P_A)$ is a game as defined before and $\cmap{W_A}{P_A^\omega}{\OP}$ is a function telling us whether each infinite play is a win for player $P$ or for player $O$.
\end{definition}
To distinguish our original games from these win-games, we shall refer to the earlier games as \emph{finitary games}.
We can define an appropriate notion of strategies for these games. If $A$ is a win-game, then a \emph{winning strategy} for $A$ is a total strategy for $(M_A,\lambda_A,\zeta_A,P_A)$ such that $W_A(s)=P$ for all $s\in\sigma^\omega$.
We can also extend our connectives to our new win-games. At the level of finitary games, our definitions of the connectives are unchanged, and it remains to define the $W$-functions. If $A$ is a win-game, then $W_{\neggame A}=\neg\circ W_A$.
If $A$ and $B$ are win-games, we would like to define $W_{A\tensor B}$ by
\[
W_{A\tensor B}(s) = W_A(s\vert_A) \wedge W_B(s\vert_B)
\]
However, this is not yet well defined, because one of $s\vert_A$ and $s\vert_B$ might be a finite sequence even when $s$ is infinite. To handle this, it will be convenient to extend the function $W_A$ to cover finite plays as well, using $\zeta_A$:
\[
W_A^* = \zeta_A\cprd W_A\from P_A\cprd P_A^\omega \to\OP
\]
We can then define
\[
W_{A\tensor B}(s) = W_A^*(s\vert_A)\wedge W_B^*(s\vert_B)
\]
Indeed, we could do away with $\zeta_A$ and $W_A$ altogether, replacing them with $W_A^*$. Since the rule for $\zeta_{A\tensor B}$ is also given by a conjunction, our rule for the tensor product becomes:
\[
W_{A\tensor B}^*(s) = W_A^*(s\vert_A)\wedge W_B^*(s\vert_B)
\]
This will be the starting point for our development of transfinite games in the next section. In our formulation, finitary games are games played over the ordinal $\omega$, while win-games will be games played over the ordinal $\omega+1$. We make no special distinction between finite and infinite plays, so the function $\zeta_A$ for our transfinite games incorporates both the function $\zeta_A$ and the function $W_A$ from our win-games, in the same manner as the function $W_A^*$ defined above. One advantage of this point of view is that we no longer have to treat the finite- and infinite-play cases separately, but can give a uniform presentation.
For now, let us concentrate on developing the theory of win-games in the traditional way. Having defined the tensor product, we may recover the definitions of the par $\parr$ and linear implication $\implies$ of two games as usual. In particular, this means that we have
\[
W_{A\implies B}^*(s) = (W_A^*(s\vert_A) \Rightarrow W_B^*(s\vert_B))
\]
It is worth examining the definitions of the $W$-functions for tensor and implication. Recall that in the tensor product, it is always player $O$ who switches games. Therefore, if $s$ is an infinite play in $A\tensor B$ and $s\vert_A$ is finite and non-empty, it must be the case that player $P$ made the last move of $s\vert_A$, and so $W_A^*(s\vert_A)=P$. If we assume our games are negative, then this extends to the case that $s\vert_A$ is empty. This tells us that player $P$ wins an infinite play $s$ in $A\tensor B$ if and only if each component is either $P$-winning or finite.
Now consider an infinite play $s$ in $A\implies B$. Here it is player $P$ who switches games, so if $s\vert_A$ is finite and non-empty then its last move is an $O$-move of $A\implies B$ (that is, a $P$-move of $A$ itself), and so $W_A^*(s\vert_A)=P$; dually, if $s\vert_B$ is finite and non-empty then it ends with an $O$-move of $B$, and so $W_B^*(s\vert_B)=O$. It follows that player $P$ wins an infinite play $s$ in $A\implies B$ if and only if $s\vert_A$ is infinite and $O$-winning or $s\vert_B$ is infinite and $P$-winning.
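Reading $P$ as a win for player $P$, these finite-component conventions can be made concrete. A sketch, under our own encoding: a component of an infinite play is a pair `(is_infinite, verdict)`, with the verdict ignored for finite components.

```python
# Compute W* on an infinite play of A (x) B and of A => B from the two
# component verdicts 'P'/'O' (a sketch of the conventions in the text).

def tensor_win(comp_a, comp_b):
    # O switches games in A (x) B, so a finite component ends with one of
    # its own P-moves and counts as P-winning; P wins iff both verdicts are P.
    va = comp_a[1] if comp_a[0] else 'P'
    vb = comp_b[1] if comp_b[0] else 'P'
    return 'P' if (va, vb) == ('P', 'P') else 'O'

def imp_win(comp_a, comp_b):
    # P switches games in A => B: a finite A-component ends with a P-move of
    # A (so contributes P), a finite B-component with an O-move of B (so
    # contributes O); then W* = (va => vb), reading P as true.
    va = comp_a[1] if comp_a[0] else 'P'
    vb = comp_b[1] if comp_b[0] else 'O'
    return 'P' if va == 'O' or vb == 'P' else 'O'
```

For instance, an infinite chatter confined to $B$ looks like `imp_win((False, None), wB)` to $\sigma$ and like `imp_win(wB, (False, None))` to $\tau$, and no verdict `wB` makes both come out as $P$.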
The point now is that if $A,B,C$ are negative win-games and $\sigma$ and $\tau$ are winning strategies for $A\implies B$ and $B\implies C$, we can never end up in the situation of Example \ref{TotalCompositionExample}, where an infinite play in $B$ stops us from replying in the composition $\comp\tau\sigma$. In that example, the unique infinite play in $\st$ would have to be either $O$-winning or $P$-winning, so either $\sigma$ or $\tau$ would not be a valid winning strategy.
\begin{proposition}
\label{CompWinningIsWinning}
Let $A,B,C$ be negative win-games, let $\sigma$ be a winning strategy for $A\implies B$ and let $\tau$ be a winning strategy for $B\implies C$. Then $\comp\tau\sigma$ is a winning strategy for $A\implies C$.
\begin{proof}
See Appendix \ref{CompWinningIsWinningProof}.
\end{proof}
\end{proposition}
As a consequence, we obtain:
\begin{theorem}
The collection of all negative win-games, with morphisms from $A$ to $B$ given by winning strategies on $A\implies B$ and composition defined as above, is a category.
\begin{proof}
We have shown already that the composition of two winning strategies is a winning strategy. Associativity of composition is inherited from finitary games and the identity strategy is winning for $A\implies A$.
\end{proof}
\end{theorem}
\section{Transfinite games}
\begin{note}
In this section, we assume familiarity with ordinals, ordinal arithmetic and transfinite induction/recursion. We shall frequently identify an ordinal $\alpha$ with the set $\{\beta\suchthat\beta<\alpha\}$ of all ordinals less than $\alpha$ without comment.
\end{note}
In the last section, we came across win-games, where in addition to the usual function $\cmap{\zeta_A}{P_A}{\OP}$ we have a function $\cmap{W_A}{P_A^\omega}{\OP}$, where $P_A^\omega$ is the set of infinite limits of plays in $P_A$. We saw that it is useful to combine these two functions into one:
\[
W_A^* = \zeta_A\cprd W_A\from P_A\cprd P_A^\omega\to\OP
\]
This suggests that it might make sense to combine $P_A$ and $P_A^\omega$ into a single set $\overline{P_A}$ and use the function $W_A^*$ instead of $\zeta_A$ and $W_A$. This new set will contain both finite and infinite plays, and the infinite plays will be precisely the limits of the finite plays. In this way, the set of possible lengths of plays in $\overline{P_A}$ is the set $\{0,1,2,3,\dots,\omega\}$, or the ordinal $\omega+1$.
Contrast this with the finitary case, in which the set of possible lengths of plays is the set $\{0,1,2,3,\dots\}$, or the ordinal $\omega$. In the coming sections, we will generalize these results so that we end up with games where the plays can have much larger ordinal lengths. Win-games will be taken to be the motivating example throughout.
\subsection{Transfinite Games and Strategies}
We fix some ordinal $\alpha$ throughout. Given a set $M$, we write $M^{*<\alpha}$ for the set of all ordinal-length sequences of elements of $M$ whose length is less than $\alpha$.
\begin{definition}
A \emph{game over $\alpha$}, or an \emph{$\alpha$-game}, is given by a tuple
\[
A = (M_A, \lambda_A, \zeta_A, P_A)
\]
where
\begin{itemize}
\item $M_A$ is a set of moves.
\item $\cmap{\lambda_A}{M_A}{\OP}$ is a function telling us which player may make each move.
\item $P_A$ is a set of pairs $(\beta, s)$, where $\beta<\alpha$ is some ordinal and $\cmap{s}{\beta}{M_A}$ is a sequence of moves of length $\beta$.
\item $\cmap{\zeta_A}{P_A}{\OP}$ tells us which player owns each position.
\end{itemize}
We will normally abuse notation and write $s$ instead of $(\beta, s)$ and $\length(s)$ instead of $\beta$. If $a\in M_A$, we will write $a$ for the $1$-move play $(1, a)$, and $st$ for the concatenation of two sequences $s$ and $t$ (where $\length(st)=\length(s)+\length(t)$). We will write $t\prefix s$ if $\length(t)\le\length(s)$ and $s\vert_{\length(t)}=t$.
To be called a game, $(M_A, \lambda_A, \zeta_A, P_A)$ has to satisfy four rules.
\begin{description}
\item[Prefix closure] $P_A$ must be closed under the relation $\prefix$.
\item[Well-formedness] If $a\in M_A$ and $sa\in P_A$ then $\zeta_A(sa)=\lambda_A(a)$.
\item[Alternating condition] If $a\in M_A$ and $sa\in P_A$ then $\zeta_A(s)=\neg\zeta_A(sa)$.
\item[Limit condition] If $\mu<\alpha$ is a limit ordinal and $s\in(M_A)^\mu$ is a sequence of length $\mu$ such that every proper prefix $t\pprefix s$ is in $P_A$, then $s$ is in $P_A$.
\end{description}
\end{definition}
\begin{remark}
In the $\omega+1$ case, the limit condition tells us that the plays of length $\omega$ in $P_A$ are precisely those infinite plays all of whose finite prefixes lie in $P_A$. It also guarantees that $P_A$ is non-empty: counting $0$ as a limit ordinal, the empty play $\emptyplay$ vacuously satisfies the limit condition.
In the $\omega+1$ case, $\zeta_A$ will take the role of what we previously called $W_A^*$, while $P_A$ takes the role of what we called $P_A\cprd P_A^\omega$.
\end{remark}
We define strategies for these games in much the same way that we defined them before:
\begin{definition}
Let $A$ be an $\alpha$-game. A (partial) \emph{strategy} for $A$ is a subset $\sigma\subset P_A$ satisfying three conditions:
\begin{description}
\item[$\sigma$ contains all $O$-replies] If $s\in\sigma$ is a $P$-play and $sa\in P_A$ is an $O$-response, then $sa\in\sigma$.
\item[$P$-replies are unique] If $s\in\sigma$ is an $O$-play and $sa,sb\in\sigma$ are $P$-responses, then $a=b$.
\item[Limit condition] If $\mu<\alpha$ is a limit ordinal and $s\in P_A$ is a sequence of length $\mu$ such that every proper prefix $t\pprefix s$ lies in $\sigma$, then $s\in\sigma$.
\end{description}
We call $\sigma$ \emph{total} if it satisfies the extra requirement:
\begin{description}
\item[$P$ always has a reply] If $s\in\sigma$ is an $O$-play, then there is some $P$-move $a\in M_A$ with $sa\in\sigma$.
\end{description}
\end{definition}
The only new part of this is the limit condition. Let us examine what it means in the $\omega+1$ case. Play according to a total strategy $\sigma$ gives rise to a sequence of plays $\emptyplay \prefix s_1\prefix s_2\prefix\dots$ (where $s_i$ has length $i$). The limit condition then tells us that the limit $s_\omega$ of these plays must be contained in $\sigma$. If $\zeta_A(s_\omega)=O$ then $P$ has no response to $s_\omega$ (since there are no plays of length $\omega+1$), contradicting totality of $\sigma$. So for $\sigma$ to be a total strategy, we must have $\zeta_A(s_\omega)=P$ for every $s_\omega$ arising as the limit of finite plays according to $\sigma$. So total strategies for $(\omega+1)$-games are exactly the same thing as the winning strategies for win-games that we defined earlier.
If $s\in P_A$ and $\length(s)$ is a successor ordinal, we will call $s$ a \emph{successor play} or \emph{successor position}. If $\length(s)$ is a limit ordinal, we will call $s$ a \emph{limiting play} or \emph{limiting position}.
\subsection{Connectives}
We may define connectives for $\alpha$-games in much the same way that we defined them for finitary games and win-games.
\begin{definition}
Let $A$ be an $\alpha$-game. The negation $\neggame A$ of $A$ is given by
\begin{itemize}
\item $M_\bneggame A=M_A$
\item $\lambda_\bneggame A = \neg\circ\lambda_A$
\item $\zeta_\bneggame A = \neg\circ\zeta_A$
\item $P_\bneggame A = P_A$
\end{itemize}
\end{definition}
For the other connectives -- $\tensor$, $\parr$ and $\implies$ -- we will use interleaved plays. If $A$ and $B$ are $\alpha$-games, then our interleaved plays will be sequences taking values in $M_A\cprd M_B$. If $\beta<\alpha$ and $s\in(M_A\cprd M_B)^\beta$, we may write $\beta_A$ for $s\inv(M_A)$ and $\beta_B$ for $s\inv(M_B)$.
\begin{lemma}
\label{TechnicalOrdinalLemma}
Let $\beta$ be an ordinal, and let $\gamma$ be any subset of $\beta$. Then $\gamma$ inherits a well-ordering from $\beta$ and, as ordinals, we have $\gamma\le\beta$.
\begin{proof}
Any non-empty subset of $\gamma$ is a non-empty subset of $\beta$, so it has a least element. Therefore, the inherited order on $\gamma$ is a well-order.
Suppose for a contradiction that $\beta<\gamma$. Then we have order preserving injections
\[
i\from \gamma\hookrightarrow\beta\quad j\from\beta\hookrightarrow\gamma
\]
where $i$ is inclusion of $\gamma$ as a subset of $\beta$ and $j$ is inclusion of $\beta$ as a proper initial prefix of $\gamma$. Let $f\from\gamma\hookrightarrow\gamma$ be the composition $\gamma\xhookrightarrow{i}\beta\xhookrightarrow{j}\gamma$. Then $f$ is order preserving.
Let $S=\{\delta\in\gamma\suchthat f(\delta)<\delta\}$. $S$ is non-empty, since $\beta\in S$ (considering $\beta$ as an element of $\gamma$), so $S$ has a least element $\xi$. Now $f(\xi)<\xi$, so $f(f(\xi))<f(\xi)$, since $f$ is order-preserving. Therefore $f(\xi)\in S$, and minimality of $\xi$ gives $\xi\le f(\xi)$, contradicting $f(\xi)<\xi$.
\end{proof}
\end{lemma}
Using Lemma \ref{TechnicalOrdinalLemma}, we see that $\beta_A,\beta_B\le\beta$; in particular, they are both less than $\alpha$, so we get induced sequences $s\vert_A$ of length $\beta_A$ and $s\vert_B$ of length $\beta_B$. We define
\[
P_A\|P_B = \left\{s\in (M_A\cprd M_B)^{*<\alpha}\suchthat s\vert_A\in P_A,\;s\vert_B\in P_B\right\}
\]
We may now define functions $\zeta_{A\tensor B},\zeta_{A\parr B},\zeta_{A\implies B}\from P_A\|P_B\to\OP$ as before:
\begin{IEEEeqnarray*}{rCc}
\zeta_{A\tensor B}(s) & = & \zeta_A(s\vert_A)\wedge\zeta_B(s\vert_B)\\
\zeta_{A\parr B}(s) & = & \zeta_A(s\vert_A)\vee\zeta_B(s\vert_B)\\
\zeta_{A\implies B}(s) & = & (\zeta_A(s\vert_A)\Rightarrow \zeta_B(s\vert_B))
\end{IEEEeqnarray*}
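Throughout, the connectives $\wedge$, $\vee$, $\Rightarrow$ and $\neg$ act on $\OP$ as the two-element Boolean algebra in which $P$ plays the role of ``true''. A minimal executable sketch of this reading (the encoding of $\OP$ as the strings `'P'`/`'O'` is our own):

```python
# The two-element Boolean algebra on OP = {'P', 'O'}, with P as "true".

def neg(x):
    return 'O' if x == 'P' else 'P'

def wedge(x, y):
    # conjunction: P only when both arguments are P
    return 'P' if (x, y) == ('P', 'P') else 'O'

def vee(x, y):
    # disjunction: O only when both arguments are O
    return 'O' if (x, y) == ('O', 'O') else 'P'

def implies(x, y):
    # implication: (x => y) = (not x) or y
    return vee(neg(x), y)
```

In particular `implies('P', 'O')` is `'O'`, the computation $(P\Rightarrow O)=O$ that reappears when we analyse the game $A\implies\bot$ below.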
Our definitions of the functions $\lambda_{A\tensor B},\lambda_{A\parr B},\lambda_{A\implies B}\from M_A\cprd M_B\to \OP$ are the same as before:
\begin{gather*}
\lambda_{A\tensor B}=\lambda_{A\parr B}=\lambda_A\cprd \lambda_B \\
\lambda_{A\implies B} = (\neg\circ\lambda_A)\cprd \lambda_B
\end{gather*}
Given a set $M$, a prefix-closed set $P\subset M^{*<\alpha}$ and functions $\cmap{\lambda}{M}{\OP},\cmap{\zeta}{P}{\OP}$, we say that a play $s\in P$ is \emph{well-formed with respect to $\lambda,\zeta$} if whenever $ta\prefix s$ (with $a\in M$) we have $\zeta(ta)=\lambda(a)$. We say that $s$ is \emph{alternating with respect to $\zeta$} if whenever $t,ta$ are prefixes of $s$ (where $a\in M$), we have $\zeta(t)=\neg\zeta(ta)$.
As before, we now define:
\begin{IEEEeqnarray*}{rCl}
P_{A\tensor B} & = & \left\{s\in P_A\|P_B \;\middle|\; \mbox{\pbox{\textwidth}{$s$ is alternating with respect to $\zeta_{A\tensor B}$ \\ $s$ is well formed with respect to $\zeta_{A\tensor B},\lambda_{A\tensor B}$}}\right\} \\
P_{A\parr B} & = & \left\{s\in P_A\|P_B \;\middle|\; \mbox{\pbox{\textwidth}{$s$ is alternating with respect to $\zeta_{A\parr B}$ \\ $s$ is well formed with respect to $\zeta_{A\parr B},\lambda_{A\parr B}$}}\right\} \\
P_{A\implies B} & = & \left\{s\in P_A\|P_B \;\middle|\; \mbox{\pbox{\textwidth}{$s$ is alternating with respect to $\zeta_{A\implies B}$ \\ $s$ is well formed with respect to $\zeta_{A\implies B},\lambda_{A\implies B}$}}\right\}
\end{IEEEeqnarray*}
Setting $M_{A\tensor B}=M_{A\parr B}=M_{A\implies B} = M_A\cprd M_B$, and restricting $\zeta_{A\tensor B},\zeta_{A\parr B},\zeta_{A\implies B}$ to $P_{A\tensor B},P_{A\parr B},P_{A\implies B}$ respectively, we arrive at our definitions of the connectives:
\begin{IEEEeqnarray*}{cCc}
A\tensor B & = & (M_{A\tensor B}, \lambda_{A\tensor B}, \zeta_{A\tensor B}, P_{A\tensor B})\\
A\parr B & = & (M_{A\parr B}, \lambda_{A\parr B}, \zeta_{A\parr B}, P_{A\parr B})\\
A\implies B & = & (M_{A\implies B}, \lambda_{A\implies B}, \zeta_{A\implies B}, P_{A\implies B})
\end{IEEEeqnarray*}
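For finite plays, membership in $P_{A\tensor B}$ is a mechanical check: project onto the components, then test alternation and well-formedness against $\zeta_{A\tensor B}$. A sketch with a hypothetical dictionary encoding of games (the field names `plays`, `zeta`, `lam` are our own):

```python
# Decide membership of a finite tagged play in P_{A (x) B}.

def restrict(s, tag):
    """Project a tagged interleaving onto one component."""
    return tuple(m for t, m in s if t == tag)

def in_tensor(s, A, B):
    # A game is encoded as a dict with a prefix-closed set 'plays',
    # a position function 'zeta' and a move function 'lam'.
    def zeta(t):
        va, vb = A['zeta'](restrict(t, 'A')), B['zeta'](restrict(t, 'B'))
        return 'P' if (va, vb) == ('P', 'P') else 'O'   # zeta_A /\ zeta_B
    def lam(move):
        tag, m = move
        return A['lam'](m) if tag == 'A' else B['lam'](m)
    for i in range(len(s) + 1):
        t = s[:i]
        if restrict(t, 'A') not in A['plays'] or restrict(t, 'B') not in B['plays']:
            return False                                # not in P_A || P_B
        if i > 0 and zeta(t) != lam(t[-1]):             # well-formedness
            return False
        if i > 0 and zeta(t) == zeta(s[:i - 1]):        # alternation
            return False
    return True

# a minimal demo game: one O-question 'q' followed by one P-answer 'a'
DEMO = {
    'plays': {(), ('q',), ('q', 'a')},
    'zeta': lambda t: ('P', 'O', 'P')[len(t)],
    'lam': {'q': 'O', 'a': 'P'}.get,
}
```

On `DEMO`, the play that answers the first question before switching components is accepted, while switching immediately after the question is rejected, illustrating that it is player $O$ who switches games in the tensor product.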
\begin{proposition}
\label{TransTensorWellFormed}
$A\tensor B$, $A\parr B$ and $A\implies B$ are well formed games. Moreover, $P_{A\tensor B}$ is the largest subset $X\subset P_A\|P_B$ such that $(M_{A\tensor B}, \lambda_{A\tensor B}, \zeta_{A\tensor B}, X)$ is a well formed game and similarly for the connectives $\parr$ and $\implies$.
\begin{proof}
As before, we prove this proposition for $A\tensor B$ and the other two cases are entirely similar. Alternatively, observe that $A\parr B=\neggame{(\neggame A \tensor \neggame B)}$ and that $A\implies B = \neggame A \parr B$.
Examining the definitions of well formed and alternating sequences, it is immediate that $P_{A\tensor B}$ is prefix-closed. Moreover, every sequence contained in $P_{A\tensor B}$ is well formed, so the well-formedness condition holds. If $s,sa\in P_{A\tensor B}$ then $s,sa$ are both prefixes of $sa$ and so we must have $\zeta_{A\tensor B}(s)=\neg\zeta_{A\tensor B}(sa)$, since $sa$ is alternating.
To show that the limit condition holds, suppose that $\mu<\alpha$ is a limit ordinal and that $s\in(M_A\cprd M_B)^\mu$ is a sequence of length $\mu$ such that $t\in P_{A\tensor B}$ for every proper prefix $t\pprefix s$. For each $t\pprefix s$, then, we must have $t\vert_A\in P_A$ and $t\vert_B\in P_B$.
We claim that $s\vert_A\in P_A$. Indeed, if $\length(s\vert_A)$ is a successor ordinal, then $s\vert_A$ has a last move $a$. Let $t$ be the prefix of $s$ consisting of all moves up to and including that last move. Then $\length(t)$ is a successor ordinal, so $t$ must be a \emph{proper} prefix of $s$, and therefore $s\vert_A=t\vert_A\in P_A$. If instead $\length(s\vert_A)$ is a limit ordinal, then for every proper prefix $u\pprefix s\vert_A$, we have some $t\prefix s$ with $t\vert_A=u$ (take $t$ to be the prefix consisting of all moves in $s$ that occur in $u$ or that occur before some move in $u$). Since $s\vert_A\ne u$, $t$ must be a \emph{proper} prefix of $s$, and so $t\vert_A\in P_A$. Since $u$ is arbitrary, the limit condition for $A$ tells us that $s\vert_A\in P_A$.
By an identical argument, $s\vert_B\in P_B$, and so $s\in P_A\|P_B$. We claim that $s$ is alternating with respect to $\zeta_{A\tensor B}$. Indeed, if $t,ta\prefix s$, then $t,ta$ must both be \emph{proper} prefixes of $s$, since $\length(s)$ is a limit ordinal and $\length(ta)$ is a successor ordinal. So $ta$ is alternating and therefore $\zeta_{A\tensor B}(t)=\neg\zeta_{A\tensor B}(ta)$.
Lastly, we show that $s$ is well formed with respect to $\lambda_{A\tensor B},\zeta_{A\tensor B}$. Suppose $ta\prefix s$. Once again, $ta$ must be a \emph{proper} prefix of $s$, since $\length(ta)$ is a successor ordinal while $\length(s)$ is a limit ordinal. Therefore, $ta$ is well formed and we have $\zeta_{A\tensor B}(ta)=\lambda_{A\tensor B}(a)$. Therefore, $s\in P_{A\tensor B}$.
For the second part of the proposition, suppose that $V\subset P_A\|P_B$ is prefix closed and satisfies the alternating condition with respect to $\zeta_{A\tensor B}$ and that if $sa\in V$ then $\zeta_{A\tensor B}(sa)=\lambda_{A\tensor B}(a)$. We want to show that $V\subset P_{A\tensor B}$. Indeed, let $s\in V$. If $ta\prefix s$, then $ta\in V$ (since $V$ is prefix closed) and therefore $\zeta_{A\tensor B}(ta)=\lambda_{A\tensor B}(a)$, so $s$ is well formed; and if $t,ta\prefix s$, then $t,ta\in V$, so $\zeta_{A\tensor B}(t)=\neg\zeta_{A\tensor B}(ta)$ by the alternating condition. So $s$ is alternating, and therefore $s\in P_{A\tensor B}$.
\end{proof}
\end{proposition}
As before, we want to show that player $O$ switches games in the tensor product and that player $P$ switches games in the par and linear implication. Recalling that if $A,B$ are $\alpha$-games then $A\parr B = \neggame{(\neggame A\tensor \neggame B)}$ and that $A\implies B = \neggame A\parr B$, we only need to prove this for the tensor product:
\begin{proposition}
\label{TransWhoSwitchesGames}
Let $A,B$ be $\alpha$-games. Suppose that $s\in P_{A\tensor B}$, $a\in M_A$ and $b\in M_B$. Then:
i) If $sab\in P_{A\tensor B}$ then $\lambda_{A\tensor B}(b) = O$;
ii) If $sba\in P_{A\tensor B}$ then $\lambda_{A\tensor B}(a) = O$.
\begin{proof}
Unchanged from the finitary case -- see Proposition \ref{WhoSwitchesGames}.
\end{proof}
\end{proposition}
\subsection{Categories of transfinite games and strategies}
We want to build a category $\G(\alpha)$ whose objects are games and whose morphisms are strategies on the linear implication of two games. Given $\alpha$-games $A$, $B$ and $C$ and strategies $\sigma$ for $A\implies B$ and $\tau$ for $B\implies C$, we define
\begin{gather*}
\sigma\|\tau = \left\{\s\in (M_A\cprd M_B\cprd M_C)^{*<\alpha\#\alpha}\suchthat \s\vert_{A,B}\in\sigma,\;\s\vert_{B,C}\in\tau\right\}\\
\comp\tau\sigma = \{\s\vert_{A,C}\suchthat \s\in\sigma\|\tau\}
\end{gather*}
Here, $\alpha\#\alpha$ is intended to be an upper bound, and indeed any sufficiently large ordinal will do without changing the elements of $\sigma\|\tau$. The symbol $\#$ denotes the so-called \emph{natural addition} on ordinals: if $\beta,\gamma$ are ordinals then $\beta\#\gamma$ is the order type of the largest well-ordering on the disjoint union $\beta\sqcup\gamma$ such that the restrictions to $\beta$ and $\gamma$ give the original orderings\footnote{Alternatively, $\#$ represents the coproduct in the category of well-ordered sets and order-preserving maps.}. For example, we have $\omega\#1=\omega+1$, but $1\#\omega=\omega+1$ as well (and \emph{not} $1+\omega$, which equals $\omega$). In any case, this is not an important point -- we want to allow the sequences $\s$ to be as long as we need them to be, but we must introduce some bound to make it clear that the resulting collection of plays is a small set. $\alpha\#\alpha$ turns out to be a suitable upper bound.
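For ordinals below $\omega^\omega$, say, both ordinary addition and the natural sum can be computed on Cantor normal forms. A sketch under our own encoding (an ordinal is a descending list of `(exponent, coefficient)` pairs, so $\omega\cdot2+3$ is `[(1, 2), (0, 3)]`):

```python
# Ordinal arithmetic on Cantor normal forms with natural-number exponents.

def natural_sum(a, b):
    """Hessenberg (natural) sum #: add coefficients exponent-by-exponent."""
    coeffs = {}
    for e, c in a + b:
        coeffs[e] = coeffs.get(e, 0) + c
    return sorted(coeffs.items(), reverse=True)

def ordinal_sum(a, b):
    """Ordinary (non-commutative) ordinal addition a + b: terms of a below
    the leading exponent of b are absorbed by b."""
    if not b:
        return list(a)
    lead = b[0][0]                            # leading exponent of b
    head = [(e, c) for e, c in a if e > lead]
    # a term of a with the same exponent merges into b's leading term
    merge = sum(c for e, c in a if e == lead)
    return head + [(lead, b[0][1] + merge)] + b[1:]
```

Here `ordinal_sum([(0, 1)], [(1, 1)])` returns `[(1, 1)]`, reflecting $1+\omega=\omega$, while `ordinal_sum([(1, 1)], [(0, 1)])` gives $\omega+1$; the natural sum gives $\omega+1$ in both orders.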
For the functor $\cmap{\F}{\G(\alpha)}{\Rel}$, we shall want to show that
\[
\comp\tau\sigma = \left\{s\in P_{A\implies C}\suchthat\exists t\in\sigma,u\in\tau\esuchthat s\vert_A=t\vert_A,\;t\vert_B=u\vert_B,\;s\vert_C=u\vert_C\right\}
\]
To show this, we will want to show that for any such $s,t,u$ there is some interleaving $\s$ such that $\s\vert_{A,C}=s$, $\s\vert_{A,B}=t$ and $\s\vert_{B,C}=u$. Even though $s$, $t$ and $u$ all have length less than $\alpha$, $\s$ might have longer length; it will, however, have length less than $\alpha\#\alpha$.
It is not immediately clear from the definition that the elements of $\comp\tau\sigma$ have length less than $\alpha$, and in fact it turns out that this is not true. For this reason, we will eventually need to place restrictions on the ordinal $\alpha$ to make sure that $\comp\tau\sigma$ is indeed a valid strategy for $A\implies C$.
\subsubsection{Completely negative and almost completely negative $\alpha$-games}
Recall that when defining a finitary game $A=(M_A,\lambda_A,\zeta_A,P_A)$, the function $\zeta_A$ was entirely determined by two pieces of information: the function $\lambda_A$ (to determine $\zeta_A(sa)$ for any non-empty play $sa$) and the designation of the game as either \emph{positive} or \emph{negative} (to determine $\zeta_A(\emptyplay)$). In the $\omega+1$ case, we also needed to specify a function $\cmap{W_A}{P_A^\omega}{\OP}$ to get the sign of each $\omega$-play. The general case is similar -- once we have specified $\zeta_A(s)$ for every play $s$ whose length is a limit ordinal, the values $\zeta_A(t)$ for plays $t$ of successor length are determined by the alternating condition (alternatively, they are determined automatically by well-formedness).
When building a category of games, we restricted our attention to the negative games. This is necessary in order to avoid technical complications\footnote{The category in \cite{abramskyjagadeesangames} has both positive and negative games as objects, together with games where either player may start, but it does not admit a faithful functor into $\Rel$ in the way that we want.}. Of course, these technical complications carry over to the transfinite case, so we need a new negativity condition for our $\alpha$-games.
\begin{definition}
Let $A$ be an $\alpha$-game. We say that $A$ is \emph{completely negative} if $\zeta_A(s)=P$ whenever $s\in P_A$ and $\length(s)$ is a limit ordinal.
\end{definition}
Since the only limiting play in the finitary case is the empty play, we see that the completely negative $\omega$-games are precisely the negative games. But now consider the $\omega+1$ case. If $A$ is a win-game, then $A$ is completely negative if and only if it is negative and all infinite plays are $P$-winning. Recall that we \emph{were} able to build a category of negative win-games without making the requirement that player $P$ wins all infinite plays, so if we insist on complete negativity then we lose much of the richness of our category for no technical benefit. We would be better off with some weaker condition.
\begin{definition}
Let $\alpha$ be an ordinal. If $\alpha=\alpha'+1$ and $A$ is an $\alpha$-game, we say that $s\in P_A$ is \emph{length maximal} if it has the largest possible length - i.e., if $\length(s)=\alpha'$.
If $\alpha$ is a limit ordinal then there are no length maximal plays in $\alpha$-games.
Let $A$ be an $\alpha$-game. We say that $A$ is \emph{almost completely negative} (\emph{acn}) if $\zeta_A(s)=P$ whenever $\length(s)$ is a limit ordinal \emph{and $s$ is not length-maximal}.
\end{definition}
Note that for most ordinals $\alpha$ there is no difference between being completely negative and being acn. The only ordinals for which there is a difference are those of the form $\mu+1$, where $\mu$ is a limit ordinal.
Our category will have acn $\alpha$-games as objects and the morphisms from $A$ to $B$ will be partial strategies for $A\implies B$. Later we will consider the total strategy case.
\begin{proposition}
\label{TensorIsAcn}
Let $A,B$ be acn $\alpha$-games. Then $A\tensor B$ is acn.
\begin{proof}
Let $s\in P_{A\tensor B}$ be a limiting play that is not length maximal. Then at least one of $s\vert_A$ and $s\vert_B$ must be a limiting play (otherwise they both have last moves and the later of the two will be the last move in $s$). Suppose that $s\vert_A$ and $s\vert_B$ are both limiting plays. By Lemma \ref{TechnicalOrdinalLemma}, $\length(s\vert_A),\length(s\vert_B)\le\length(s)$, so neither $s\vert_A$ nor $s\vert_B$ is length maximal, since $s$ is not. Therefore, $\zeta_A(s\vert_A)=\zeta_B(s\vert_B)=P$, since $A$ and $B$ are acn. Therefore, $\zeta_{A\tensor B}(s)=P$, since $P\wedge P=P$.
Suppose instead that $s\vert_A$ is a successor play. Then $s\vert_B$ is a limiting play. Since $s$ is a limiting play, there must be moves from $B$ in $s$ that occur after the last move in $s\vert_A$. Therefore, $\zeta_A(s\vert_A)=P$ by Proposition \ref{TransWhoSwitchesGames}. Now $s\vert_B$ is a limiting play and not length maximal, so $\zeta_B(s\vert_B)=P$ and therefore $\zeta_{A\tensor B}(s)=P$. The case where $s\vert_B$ is a successor play and $s\vert_A$ is a limiting play is similar.
\end{proof}
\end{proposition}
Note, however, that $A\implies B$ is not necessarily acn, even if $A,B$ are completely negative. This is a direct consequence of the extension of the definitions to ordinals greater than $\omega$. For negative $\omega$-games $A,B$, the only limiting play is the empty play and its components are both necessarily empty, so they are also limiting plays. Therefore, $A\implies B$ is a negative game, by the equation $(P\Rightarrow P) = P$.
However, when we move beyond $\omega$, it is no longer true that the components of a limiting play are always limiting plays, so we cannot be sure that they are $P$-positions, and this means that $A\implies B$ is not necessarily completely negative, even if $A,B$ are. As an example, we take the completely negative game
\[
\bot=(\{*\},\;\{*\mapsto O\},\;\{\emptyplay\mapsto P,\;*\mapsto O\},\;\{\emptyplay, *\})
\]
which has a single $O$-move with no $P$-replies. If $A$ is an acn $\alpha$-game, then $A\implies\bot$ is the same as the game $\neggame A$, but with the extra $O$-move $*$ attached to the start. For $\omega$-games, this extra $O$-move means that $A\implies\bot$ is negative if $A$ is negative; but if $A$ contains plays of infinite length, then $A\implies\bot$ need not be completely negative even when $A$ is: indeed, if $s\in P_A$ is a non-empty limiting play, then we have
\[
\zeta_{A\implies\bot}(*s)=(\zeta_A(s)\Rightarrow\zeta_\bot(*))=(P\Rightarrow O) = O
\]
But $*s$ is a limiting play, so $A\implies\bot$ is not completely negative.
This will pose a problem for our categorical semantics. We want to build a category where the objects are acn $\alpha$-games and where the morphisms from a game $A$ to a game $B$ are strategies for $A\implies B$. We also want the category to be monoidal closed, which means that we would really like $A\implies B$ to be an object of the category. As we have just seen, this is not the case in general.