r"""
This file contains classes for computing isogeny classes of Abelian varieties over finite fields.
The computation rests on the Honda-Tate theorem, which gives a bijection between isogeny classes
of abelian varieties of dimension g over F_q and Weil q-polynomials of degree 2g satisfying certain
conditions. Recall that f is a Weil q-polynomial if all of the complex roots of f have absolute value
equal to the square root of q. For more details on the conditions, see the `simplepow_brauer_data`
attribute.
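As a quick numerical illustration (a standalone sketch, not part of this script), the degree-2 case of the Weil condition can be checked with the quadratic formula: for q = 9, the polynomial x^2 - x + 9 has a pair of complex-conjugate roots with product q, hence both of absolute value sqrt(9) = 3.

```python
import cmath

def quadratic_roots(b, c):
    # Roots of x^2 + b*x + c over the complex numbers
    disc = cmath.sqrt(b * b - 4 * c)
    return (-b + disc) / 2, (-b - disc) / 2

q = 9
# x^2 - x + 9 is a Weil 9-polynomial of degree 2 (g = 1):
# the roots are complex conjugates with product q, so |root| = sqrt(q)
roots = quadratic_roots(-1, q)
assert all(abs(abs(alpha) - q**0.5) < 1e-9 for alpha in roots)
```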
The computation is organized into stages. Each stage adds a number of attributes, which are then
loaded by later stages.
- Enumerate simple isogeny classes using the `WeilPolynomials` iterator over Weil q-polynomials
- Construct all isogeny classes as products of simple ones
- Compute base changes from F_q to F_{q^r}, filling in primitive models, twists and endomorphism algebras
- Combine data from external sources (number of isomorphism classes, number of Jacobians, etc)
- Collate into a single file for uploading to Postgres
Each stage is represented by a `Stage` instance, which produces a list of `Tasks` (usually one
for each g and q). A task can depend on previous tasks being completed, and there is a
`Controller` object that starts tasks running once their prerequisites have been met.
Tasks usually write their output to a file, and create an additional file to signal completion.
The sequence of stages to be executed is controlled by a configuration file `config.ini`.
The current implementation does not optimize for minimizing disk space: data may be duplicated
in the output of multiple stages (for simplicity of the computation framework). If this becomes
problematic at any point, a cleanup stage could be added.
The actual computation of the data associated to a Weil q-polynomial is done by an `IsogenyClass`
instance. Attributes of an isogeny class that correspond to columns in the database
are decorated with a type, which specifies how they are loaded from and saved to a file.
The file format used is precisely that required by the `copy_from` and `reload` methods
of the `PostgresTable` class in the LMFDB. The output of this script is thus easily loaded
into the LMFDB. Note that we do not guard against malicious files, since the assumption is
that files are produced by these scripts.
AUTHORS:
- (2016-05-11) Taylor Dupuy, Kiran S. Kedlaya, David Roe, Christelle Vincent
- (2019-02-02) David Roe, Everett Howe, added more tests for principal polarizations and Jacobians
- (2019) David Roe, Major refactoring and documentation
EXAMPLES:
Currently, the lmfdb can only be imported from the lmfdb folder, and then this file must be attached
from the abvar-fq folder. The actual computation must be done from the lmfdb folder,
since new instances of the `PostgresDatabase` object must be created.
The following example assumes that the lmfdb and abvar-fq repositories are installed side-by-side::
sage: cd lmfdb
sage: from lmfdb import db
sage: %attach ../abvar-fq/isogeny_classes.sage
sage: IC = IsogenyClasses()
sage: IC.run_parallel(4)
The stages to be executed are controlled by the configuration file `config.ini`.
In addition to containing global configuration options, it specifies which columns should be stored by each stage.
An example configuration is included in this repository, and we describe the different sections here.
[extent]
This section controls the range of g and q to be computed. For any (g, q) included, all
(g',q) with g' < g and all (g, q') with q'|q must also be included.
- `g1`, `g2`, etc. -- each gives the list of `q` to be included for that dimension.
[plan]
This section controls which stages are actually run. By editing this section you can control
which stages are computed.
- `stages` -- a list of stages, which will be run in order
[dirs]
This section describes where on the file system results are stored.
- `base` -- a base folder
- `subdirs` -- a list of subfolders to be created. If the default values are kept,
the subfolders play the following roles:
- `basic` -- holds the results of the GenerateSimple and GenerateAll stages
- `endalg` -- holds the results of the Basechange stage
- `external` -- location for adding external data files for input to the Combine stage
- `complete` -- holds the results of the Combine stage
- `logs` -- holds logs for recording progress of computational stages
[logging]
This section contains configuration options for logging
- `logfile` -- a template for where logs are stored (e.g. logs/{name}{data}.log)
- `logfrequency` -- the frequency with which a status report will be posted to the log file
- `logheader` -- a format string for a message to be printed when a task starts or finishes
[StageGenerateSimple], [StageGenerateAll], [StageBasechange],...
Each stage defined in this file has a corresponding section in the configuration file, specifying
input and output parameters
- `in0`, `in1`, ... -- a list of input files, usually with formatting slots for g and q
- `out0`, `out1`, ... -- a list of output files, usually with formatting slots for g and q
- `data0`, `data1`, ... -- for each output file, a list of attributes to be saved to that file
Note that filenames should contain a .txt suffix, which is replaced with `.done` to indicate that
output to that file has completed.
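A minimal illustration of reading such a configuration with Python's `configparser` (the section contents below are hypothetical, not the repository's actual `config.ini`):

```python
from configparser import ConfigParser

# Hypothetical config in the shape described above; the real config.ini
# in the repository may use different keys and values.
sample = """
[extent]
g1 = [2,3,4,5,7,8,9]
g2 = [2,3,4,5]

[plan]
stages = [GenerateSimple, GenerateAll]
"""

cfg = ConfigParser()
cfg.read_string(sample)
qs_for_g2 = [int(q) for q in cfg.get('extent', 'g2').strip('[]').split(',')]
assert qs_for_g2 == [2, 3, 4, 5]
```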
Fields we want to populate, with example values:
label: "2.9.ab_d"
polynomial: [1,-1,3,-9,81]
angle_numbers (doubles): [0.23756..., 0.69210...]
number_field: "4.0.213413.1"
p-rank: 1
slopes: [0,1/2,1/2,1]
A_counts: [75, 7125]
C_counts: [9, 87]
known_jacobian (0,1,-1): 1
decomposition: ["9.2.-1._3"]
principally_polarizable (0,1,-1): 1
Brauer invariants: inv_v( End(A_{FFbar_q}) ⊗ QQ ) = (v(\pi)/v(q)) * [QQ(\pi)_v : QQ_p] for each place v | p of QQ(\pi); these are stored as elements of QQ.
Primitive models:
TODO:
Add links from ec, ecnf, g2c
Splitting field
Add p, geometric_simple_multiplicities, splitting field
"""
######################################################################################################
from sage.databases.cremona import cremona_letter_code, class_to_int # for the make_label function
from sage.misc.lazy_attribute import lazy_attribute
from sage.misc.cachefunc import cached_method, cached_function
from sage.misc.mrange import cartesian_product_iterator
from sage.all import Polynomial
from sage.rings.polynomial.weil.weil_polynomials import WeilPolynomials
from sage.repl.attach import attached_files
import json, os, re, sys, time, shutil
import psutil
import heapq
opj, ope = os.path.join, os.path.exists
from copy import copy
from collections import defaultdict, Counter
from multiprocessing import Process, Queue
from datetime import datetime
from itertools import combinations, combinations_with_replacement, zip_longest, islice
from configparser import ConfigParser, NoOptionError
from lmfdb.backend.database import PostgresDatabase
from psycopg2.sql import SQL, Identifier, Literal
from cypari2.handle_error import PariError
int_re = re.compile(r'-?\d+')
try:
# Add the location of weil_polynomials.pyx to the load path
sys.path.append(os.path.dirname(os.path.realpath(__file__)))
except NameError:
pass
# We need a different connection to postgres for each process.
@cached_function
def get_db():
return PostgresDatabase()
######################################################################################################
# Timeout utility, adapted from http://code.activestate.com/recipes/577853-timeout-decorator-with-multiprocessing/
# Timeouts are not yet used in the remainder of this file
class TimeoutException(Exception):
pass
class RunableProcessing(Process):
def __init__(self, func, *args, **kwargs):
self.queue = Queue(maxsize=1)
args = (func,) + args
Process.__init__(self, target=self.run_func, args=args, kwargs=kwargs)
def run_func(self, func, *args, **kwargs):
try:
result = func(*args, **kwargs)
self.queue.put((True, result))
except Exception as e:
self.queue.put((False, e))
def done(self):
return self.queue.full()
def result(self):
return self.queue.get()
def timeout(seconds, force_kill=True):
def wrapper(function):
def inner(*args, **kwargs):
now = time.time()
proc = RunableProcessing(function, *args, **kwargs)
proc.start()
proc.join(seconds)
if proc.is_alive():
if force_kill:
proc.terminate()
runtime = time.time() - now
raise TimeoutException('timed out after {:.2f} seconds'.format(runtime))
assert proc.done()
success, result = proc.result()
if success:
return result
else:
raise result
return inner
return wrapper
######################################################################################################
# Some combinatorial utilities for enumerating non-simple isogeny classes from simple ones
class CombosWithReplacement(object):
"""
We augment itertools' combinations_with_replacement with a length
"""
def __init__(self, L, m):
self.L = L
self.m = m
def __len__(self):
return binomial(len(self.L)+self.m-1,self.m)
def __iter__(self):
return combinations_with_replacement(self.L, self.m)
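The length formula is the usual stars-and-bars count of multisets; a quick standalone check in plain Python:

```python
from itertools import combinations_with_replacement
from math import comb

L, m = ['a', 'b', 'c'], 2
combos = list(combinations_with_replacement(L, m))
# binomial(len(L) + m - 1, m) counts multisets of size m from len(L) symbols
assert len(combos) == comb(len(L) + m - 1, m) == 6
```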
# These functions are used for computing base changes
@cached_function
def symfunc(i, r):
"""
Returns a symmetric function expressing the ith coefficient of the rth base change map.
Namely, if f(x) = 1 + ... + a_n x^n and g(x) = 1 + ... + b_n x^n is the polynomial
whose roots are the rth powers of the roots of f, then b_i = symfunc(i, r)(1, a1, ..., a_n)
EXAMPLES:
sage: symfunc(1,2)
e[1, 1] - 2*e[2]
sage: symfunc(2,2)
e[2, 2] - 2*e[3, 1] + 2*e[4]
"""
Sym = SymmetricFunctions(QQ)
p = Sym.powersum()
if i == 0:
return p.one()
e = Sym.elementary()
return e(p(e[i]).map_support(lambda A: Partition([r*c for c in list(A)])))
@cached_function
def basechange_transform(g, r):
"""
Returns a transform that takes in a `q`-Weil L-polynomial of degree `2g` (constant coefficient 1)
and returns a `q^r`-Weil L-polynomial of degree `2g` whose roots are the `r`th powers of the roots
of the input.
"""
f = [symfunc(i, r) for i in range(g+1)]
coeffs = [b.coefficients() for b in f]
exps = [[{a: list(elem).count(a) for a in set(elem)} for elem in sorted(b.support()) if list(elem) and max(elem) <= 2*g] for b in f]
def bc(Lpoly, q):
# Assume that Lpoly has constant coefficient 1.
R = Lpoly.parent()
signed_coeffs = [(-1)^j * c for j, c in enumerate(Lpoly)]
bc_coeffs = [1]
for i in range(1, g+1):
bc_coeffs.append((-1)^i*sum(c*prod(signed_coeffs[j]^e for j,e in D.items()) for c, D in zip(coeffs[i], exps[i])))
for i in range(1,g+1):
# a_{g+i} = q^(ri) * a_{g-i}
bc_coeffs.append(q^(r*i) * bc_coeffs[g-i])
return R(bc_coeffs)
return bc
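For g = 1 the transform reduces to the classical trace recurrence: if the L-polynomial is 1 - t*x + q*x^2 with Frobenius roots alpha, beta, then t_r = alpha^r + beta^r satisfies t_r = t*t_{r-1} - q*t_{r-2}. A standalone sketch, not using the symmetric-function machinery above:

```python
def base_change_trace(t, q, r):
    # Power sums t_r = alpha^r + beta^r of the roots of x^2 - t*x + q,
    # computed via the recurrence t_r = t * t_{r-1} - q * t_{r-2}
    t_prev, t_cur = 2, t  # t_0 = 2, t_1 = t
    for _ in range(r - 1):
        t_prev, t_cur = t_cur, t * t_cur - q * t_prev
    return t_cur if r >= 1 else 2

# The base change of 1 - t*x + q*x^2 to F_{q^r} is 1 - t_r*x + q^r*x^2;
# e.g. t = 1, q = 2, r = 2 gives t_2 = 1^2 - 2*2 = -3
assert base_change_trace(1, 2, 2) == -3
```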
def tensor_charpoly(f, g):
r"""
INPUT:
- ``f`` -- the characteristic polynomial of a linear transformation
- ``g`` -- the characteristic polynomial of a linear transformation
OUTPUT:
The characteristic polynomial of the tensor product of the linear transformations
EXAMPLES::
sage: x = PolynomialRing(ZZ,"x").gen();
sage: tensor_charpoly((x - 3) * (x + 2), (x - 7) * (x + 5))
(x - 21) * (x - 10) * (x + 14) * (x + 15)
"""
R.<y> = g.parent()[]
A = f(y)
B = R(g.homogenize(y))
return B.resultant(A)
def base_change(Lpoly, r, algorithm=None, g = None, q = None, prec=53):
"""
Returns a polynomial whose roots are the `r`th powers of the roots of the input polynomial.
INPUT:
- ``Lpoly`` -- a Weil `q`-polynomial of degree `2g`
- ``r`` -- a positive integer
- ``algorithm`` -- either "sym", "approx" or "linalg".
- If "sym", an algorithm based on symmetric polynomials is used.
This allows for caching the transformation and fast evaluation, but has a rapidly growing
memory footprint when `r` gets larger than about 10.
- If "approx", roots are found over a complex field, raised to the `r`th power
and the base change polynomial is reconstructed. For large `r` you must give an
appropriate precision manually.
- If "linalg", the base change is computed by evaluating at roots of unity times
the generator of the polynomial ring. This is slower than "sym" but does not have
the memory issues, and no precision analysis is needed since the arithmetic is exact.
If no algorithm is given, `r` is factored and "sym" and "resultant" are used recursively, with
"resultant" used for primes larger than 7
"""
if g is None:
g = Lpoly.degree()
assert g % 2 == 0
g = g // 2
if q is None:
q = Lpoly.leading_coefficient().nth_root(g)
if algorithm is None:
# It's better to divide by larger numbers if possible, as long as we don't get so big that
# computing the basechange_transform starts to cause memory issues
for ell in range(7,1,-1):
while r % ell == 0:
Lpoly = base_change(Lpoly, ell, algorithm='sym', g=g, q=q)
q = q^ell
r = r//ell
for ell, e in ZZ(r).factor():
for i in range(e):
Lpoly = base_change(Lpoly, ell, algorithm='resultant', g=g, q=q)
return Lpoly
elif algorithm == 'approx':
C = ComplexField(prec)
R = RealField(prec)
LC = Lpoly.change_ring(C)
x = LC.parent().gen()
approx = prod((1 - x/alpha^r)^e for alpha, e in LC.roots())
approx_coeffs = approx.list()
acceptable_error = R(2)^-(prec//2)
exact_coeffs = [c.real().round() for c in approx_coeffs]
actual_error = max(abs(ap - ex) for ap, ex in zip(approx_coeffs, exact_coeffs))
if actual_error > acceptable_error:
raise RuntimeError(actual_error)
return Lpoly.parent()(exact_coeffs)
elif algorithm == 'resultant':
# This seems to be slightly faster than the following 'linalg' method, with a larger difference for smaller g
R = Lpoly.parent()
T = R.gen()
S.<u> = R[]
return Lpoly(u).resultant(u^r - T)
elif algorithm == 'linalg':
# From https://github.com/edgarcosta/endomorphisms/blob/master/endomorphisms/OverFiniteField/utils.py
K.<zeta> = CyclotomicField(r)
R = Lpoly.parent()
f = Lpoly.change_ring(K)
T = f.parent().gen()
poly = 1
for i in range(r):
poly *= f(zeta^i * T)
L = poly.list()
newf = [None]*(1 + f.degree())
for i, ci in enumerate(L):
if i % r != 0:
if ci != 0:
raise RuntimeError("i = %s, ci = %s" % (i, ci))
else:
newf[i/r] = ZZ(ci)
return R(newf)
else:
return basechange_transform(g, r)(Lpoly, q)
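For r = 2 the idea behind the exact "linalg" method is easy to see by hand: f(x)*f(-x) is a polynomial in x^2, and keeping only the even-degree coefficients yields the base change. A standalone sketch in plain Python:

```python
def polymul(a, b):
    # Multiply two polynomials given as coefficient lists (constant term first)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

f = [1, -1, 2]  # 1 - x + 2x^2, a Weil 2-polynomial (q = 2, g = 1)
f_neg = [c * (-1)**i for i, c in enumerate(f)]  # coefficients of f(-x)
prod = polymul(f, f_neg)
assert all(c == 0 for i, c in enumerate(prod) if i % 2 == 1)
bc = prod[::2]  # coefficients of even powers only
assert bc == [1, 3, 4]  # 1 + 3x + 4x^2, the base change to F_4
```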
def hom_degrees(f, g, q):
x = f.parent().gen()
tcp = tensor_charpoly(f, g)(x/q)
# We only care about the part of tcp which is a product of cyclotomic polynomials
tcp = tcp.cyclotomic_part()
# There seems to be a difficult-to-diagnose bug in factor, which causes
# PariError: bug in gerepile, significant pointers lost, please report
try:
factors = tcp.factor()
except PariError:
print("Pari error in res(%s, %s)" % (f, g))
# Try to work around the bug using squarefree decomposition
sqfree = tcp.squarefree_decomposition()
factors = []
for fac, e in sqfree:
for subfac, sube in fac.factor():
factors.append((subfac, e*sube))
degrees = sorted(Cyclotomic().order(factor) for (factor, power) in factors)
if degrees[0] == 0:
# Every factor should be cyclotomic
raise RuntimeError
return degrees
def minimal_twists(IC1, IC2, min_degs=None, max_degs=None):
"""
INPUT:
- ``IC1``, ``IC2`` -- two IsogenyClass objects with the same g and q
- ``min_degs`` -- (optional) a list of degrees so that every minimal twist degree
is a multiple of at least one, and a divisor of their lcm.
- ``max_degs`` -- (optional) a list of degrees where the isogeny classes become isogenous
(every deg will be a divisor of at least one of these)
OUTPUT:
- A list of pairs ``(deg, bc)`` giving minimal twists (for the divisibility poset),
where ``deg`` is a degree (a divisor of the lcm of ``degs``) and ``bc`` is the common
base change to that degree.
"""
q = IC1.q
g = IC1.g
assert q == IC2.q and g == IC2.g
if min_degs is None and max_degs is None:
min_degs = hom_degrees(IC1.Lpoly, IC2.Lpoly, q)
if not min_degs:
return []
if max_degs is None:
max_degs = [lcm(min_degs)]
primes = sorted(set(sum([D.prime_divisors() for D in max_degs], [])))
if min_degs is None:
min_degs = primes
found_degs = []
def _minimal_twists(cur_level):
"""
INPUT:
- ``cur_level`` -- a list of integers ``d`` so that
``d`` divides at least one of ``max_degs``,
and ``d`` is not divisible by any of ``found_degs``
OUTPUT:
None, but adds the appropriate degrees to ``found_degs`` and recursively calls the next level.
"""
if not cur_level:
return
for d in cur_level:
if IC1.basechange[d] == IC2.basechange[d]:
found_degs.append(d)
next_level = []
next_seen = set()
for d in cur_level:
for p in primes:
dp = d*p
if dp in next_seen:
continue
next_seen.add(dp)
if any(found.divides(dp) for found in found_degs):
continue
if not any(dp.divides(M) for M in max_degs):
continue
next_level.append(dp)
next_level.sort()
_minimal_twists(next_level)
_minimal_twists(min_degs)
found_degs.sort()
return [(d, IsogenyClass(Lpoly=IC1.basechange[d])) for d in found_degs]
def find_twists(ICs):
"""
Split up a list of isogeny classes that have similar geometric data
(e.g. g, q, slopes and geometric endomorphism algebra),
into geometric isogeny classes, giving the minimal degrees required to realize isogenies.
INPUT:
- ``ICs`` -- a list of IsogenyClass objects (all with the same g and q)
OUTPUT:
- None, but we set twists on each isogeny class to be a list of triples
``(twist, bc, deg)``, where ``twist`` is another isogeny class over the same
base field, ``deg`` is a minimal degree over which the isogeny class and ``twist`` become
isogenous (note that the same twist might become isogenous at both degree 2 and degree 3;
we list minimal degrees in the divisibility poset), and ``bc`` is the common base change.
"""
clusters = []
seen = set()
twists = {}
q = ICs[0].q
for IC1 in ICs:
# set up for appending at the end
IC1.twists = []
if IC1.label in seen:
continue
seen.add(IC1.label)
clusters.append([IC1])
for IC2 in ICs:
if IC2.label in seen:
continue
mtwists = minimal_twists(IC1, IC2)
if mtwists:
seen.add(IC2.label)
clusters[-1].append(IC2)
twists[IC1, IC2] = mtwists
# We've now broken up the input into clusters; we fill out
# the minimal twist degrees within each cluster
for C in clusters:
IC1 = C[0]
for IC2, IC3 in combinations(C[1:], 2):
deg2 = [d for d,bc in twists[IC1, IC2]]
deg3 = [d for d,bc in twists[IC1, IC3]]
max_degs = []
for a, b in cartesian_product_iterator([deg2, deg3]):
if not any(d.divides(a*b) for d in max_degs):
max_degs.append(a*b)
twists[IC2, IC3] = minimal_twists(IC2, IC3, max_degs=max_degs)
# Now construct the return value
for IC1, IC2 in combinations(ICs, 2):
IC1.twists.extend([(IC2, base_change, d) for (d, base_change) in twists.get((IC1, IC2), [])])
IC2.twists.extend([(IC1, base_change, d) for (d, base_change) in twists.get((IC1, IC2), [])])
for IC in ICs:
IC.twists.sort(key = lambda x: (x[2], x[0].Ppoly))
IC.twists = [(JC.label, BC.label, d) for (JC, BC, d) in IC.twists]
class Cyclotomic(UniqueRepresentation):
"""
An object for determining which polynomials are cyclotomic, and their order if they are
"""
# Convention: phi(m) = n
def __init__(self):
self.polynomials = {}
self.poly_mbound = 1
self.poly_nbound = 0
self.orders = defaultdict(list)
self.order_mbound = 1
self.order_nbound = 0
def order(self, f):
"""
If f is a cyclotomic polynomial Phi_m, return m;
otherwise, return 0
"""
if f.degree() >= self.poly_nbound:
self.compute(f.degree(), poly=True)
return self.polynomials.get(f, 0)
def inverse_phi(self, n):
"""
Return the list of `m` so that `Phi(m) = n`
"""
if n >= self.order_nbound:
self.compute(n, poly=False)
return self.orders[n]
@staticmethod
def _canonicalize_elem_divisors(L):
"""
Return the tuple of integers so that each divides the next and
they yield the same abelian group as ``L``
"""
ed = diagonal_matrix(ZZ, L).elementary_divisors()
return tuple(d for d in ed if d != 1)
@classmethod
def _Um_structure(cls, m):
v2, u2 = ZZ(m).val_unit(2)
if v2 < 2:
L = []
elif v2 == 2:
L = [2]
else:
L = [2,2^(v2-2)]
L.extend([(p-1)*p^(e-1) for p,e in factor(u2)])
return cls._canonicalize_elem_divisors(L)
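The case split above is the standard decomposition of (Z/m)^*: trivial for 2^v with v < 2, C2 for v = 2, C2 x C_{2^(v-2)} for v >= 3, and cyclic of order (p-1)*p^(e-1) for odd prime powers. A standalone plain-Python sketch, without the elementary-divisor canonicalization used above:

```python
def um_structure(m):
    # Cyclic factors of the unit group (Z/m)^*, one per prime power in m
    L = []
    v2 = 0
    while m % 2 == 0:
        m //= 2
        v2 += 1
    if v2 == 2:
        L.append(2)
    elif v2 >= 3:
        L.extend([2, 2**(v2 - 2)])
    p = 3
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            L.append((p - 1) * p**(e - 1))
        p += 2
    if m > 1:
        L.append(m - 1)  # remaining odd prime, to the first power
    return L

# (Z/16)^* = C2 x C4, (Z/15)^* = C2 x C4, (Z/7)^* = C6
assert sorted(um_structure(16)) == [2, 4]
assert sorted(um_structure(15)) == [2, 4]
assert um_structure(7) == [6]
```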
def inverse_structure(self, elem_divisors):
"""
Return the list of `m` so that `(Z/m)^x` has the given abelian invariants
"""
n = prod(elem_divisors)
elem_divisors = self._canonicalize_elem_divisors(elem_divisors)
return [m for m in self.inverse_phi(n) if self._Um_structure(m) == elem_divisors]
def get_mbound(self, nbound):
"""
Return an integer `mbound` so that if `n <= nbound` and `n = Phi(m)` then `m < mbound`
"""
# https://math.stackexchange.com/questions/265397/inversion-of-the-euler-totient-function
steps = 0
n = nbound
while n != 1:
n = euler_phi(n)
steps += 1
return 2 * 3^steps + 1
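The bound works because iterating phi reaches 1 quickly. A standalone plain-Python check (with a trial-division totient) that the bound dominates all preimages for a small case:

```python
def euler_phi(n):
    # Euler's totient via trial-division factorization (fine for small n)
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            result *= (p - 1) * p**(e - 1)
        p += 1
    if m > 1:
        result *= m - 1
    return result

def get_mbound(nbound):
    # Number of iterations of phi needed to reach 1 controls the preimages
    steps, n = 0, nbound
    while n != 1:
        n = euler_phi(n)
        steps += 1
    return 2 * 3**steps + 1

assert get_mbound(4) == 19
# Every m with phi(m) <= 4 (the largest is m = 12) lies below the bound
assert all(m < 19 for m in range(1, 200) if euler_phi(m) <= 4)
```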
def compute(self, nbound, poly):
"""
Adds cyclotomic polynomials to the cache, including all of degree less than `nbound`.
"""
mbound = self.get_mbound(nbound)
if poly:
for m in srange(self.poly_mbound, mbound):
f = cyclotomic_polynomial(m)
self.polynomials[f] = m
if m >= self.order_mbound:
self.orders[f.degree()].append(m)
self.poly_nbound = max(nbound, self.poly_nbound)
self.poly_mbound = max(mbound, self.poly_mbound)
else:
for m in srange(self.order_mbound, mbound):
self.orders[euler_phi(m)].append(m)
self.order_nbound = max(nbound, self.order_nbound)
self.order_mbound = max(mbound, self.order_mbound)
@cached_method
def twist_degrees(self, g, algorithm='magma'):
"""
Return a list of possible degrees
"""
if algorithm == 'magma':
try:
G = magma.CoxeterGroup('"B%d"'%g)
except TypeError: # magma not installed
algorithm = 'gap'
else:
subgps = [magma('%s`subgroup'%(c.name())) for c in G.Subgroups()]
# AbelianQuotientInvariants works for PCGroups but not PermGroups
structures = set(tuple(map(ZZ, H.AbelianQuotient().AbelianInvariants())) for H in subgps)
# May also need to allow quotients
if algorithm == 'gap':
G = WeylGroup(["B", g]).gap()
subgps = G.ConjugacyClassesSubgroups()
structures = set(tuple(map(ZZ, H.Representative().AbelianInvariants())) for H in subgps)
return sorted(set(sum((self.inverse_structure(ED1+ED2) for ED1,ED2 in combinations_with_replacement(structures,2)), [])))
class BasechangeCache(object):
"""
Used for caching calls to base_change.
INPUT:
- ``D`` -- a list of key-value pairs, where keys are positive integers
and values are lists of coefficients for the Lpoly of the basechange
to that degree. 1 must be a key.
"""
def __init__(self, D):
self._D = {}
for n, coeffs in D:
self._D[n] = ZZ['x'](coeffs)
self.g = g = self._D[1].degree()//2
self.q = self._D[1].leading_coefficient().nth_root(g)
def __getitem__(self, n):
if n in self._D:
return self._D[n]
n = ZZ(n)
F = n.factor()
ps = [p for p,e in F]
curcost = (max(ps), sum(e for p,e in F), n)
bestd = ZZ(1)
for d in reversed(n.divisors()):
if d not in self._D or d == 1:
continue
m = n//d
F = [(p, m.valuation(p)) for p in ps]
cost = (max(p for p,e in F if e > 0), sum(e for p,e in F), m)
if cost < curcost:
bestd = d
curcost = cost
d = bestd
m = n//d
g, q = self.g, self.q
for p in reversed(ps):
for i in range(m.valuation(p)):
self._D[d*p] = base_change(self._D[d], p, g=g, q=q^d)
d *= p
return self._D[n]
def save(self):
return [(n, list(self._D[n])) for n in sorted(self._D)]
def seed(self, s, rs):
"""
Compute the base change to all divisors of s, and to all entries in the list rs
"""
s = ZZ(s)
g, q = self.g, self.q
def _seed(n, fac):
if not fac:
return
p0, e0 = fac[0]
q0 = p0^e0
if e0 > 1:
subfac = [(p0, e0-1)] + fac[1:]
else:
subfac = fac[1:]
_seed(n // p0, subfac)
for d in (n // q0).divisors():
d0 = d*q0//p0
self._D[d*q0] = base_change(self._D[d0], p0, g=g, q=q^d0)
_seed(s, s.factor())
# Compute the base change for entries in rs
for r in rs:
bcr = self[r]
# These functions extend cremona codes to negative integers (prepending an a)
def signed_cremona_letter_code(m):
if m >= 0:
return cremona_letter_code(m)
else:
return 'a' + cremona_letter_code(-m)
def signed_class_to_int(code):
if code == 'a':
return ZZ(0)
elif code.startswith('a'):
return -ZZ(class_to_int(code[1:]))
else:
return ZZ(class_to_int(code))
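A standalone model of the signed codes (`cremona_letter_code` here is a simplified stand-in for the Sage function imported above):

```python
def cremona_letter_code(n):
    # 0 -> 'a', 1 -> 'b', ..., 25 -> 'z', 26 -> 'ba', ... (base 26, digits a-z)
    if n == 0:
        return 'a'
    s = ''
    while n:
        s = chr(ord('a') + n % 26) + s
        n //= 26
    return s

def signed_cremona_letter_code(m):
    # Negative integers are marked by prepending an extra 'a'
    return cremona_letter_code(m) if m >= 0 else 'a' + cremona_letter_code(-m)

assert cremona_letter_code(26) == 'ba'
assert signed_cremona_letter_code(0) == 'a'
assert signed_cremona_letter_code(-1) == 'ab'
```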
# The following classes support nicely loading and saving
# to disk in a format readable by postgres
class PGType(lazy_attribute):
"""
An extension of a lazy_attribute that decorates an attribute with a postgres type
and provides mechanisms for writing to and loading from a file.
INPUT:
- ``func`` -- a function for computing the attribute
- ``internal`` -- boolean (default False), whether this attribute is an intermediate result
and not stored in the final file.
"""
check=None # bound for integer types
def __init__(self, func=None, internal=False):
if func is None:
self.internal = internal
else:
if not hasattr(self, 'internal'):
self.internal = False
self(func)
def __call__(self, func):
lazy_attribute.__init__(self, func)
return self
def _load(self, x):
"""
Wraps :meth:`load` for the appropriate handling of NULLs.
"""
if x != r'\N':
return self.load(x)
def load(self, x):
"""
Takes a string from a file and returns the appropriate
Sage object.
Should be inverse to :meth:`save`
This default function can be overridden in subclasses.
"""
if int_re.match(x):
return ZZ(x)
elif x.startswith('{'):
return sage_eval(x.replace('{','[').replace('}',']'))
elif x.startswith('['): # support jsonb
return sage_eval(x)
else:
return x
def _save(self, x):
"""
Wraps :meth:`save` for the appropriate handling of NULLs.
"""
if x is None:
return r'\N'
else:
return self.save(x)
def save(self, x, recursing=False):
"""
Takes a Sage object stored in this attribute
and returns a string appropriate to write to a file
that postgres can load.
Should be inverse to :meth:`load`
This default function can be overridden in subclasses.
"""
if isinstance(x, (list, tuple)):
if self.pg_type == 'jsonb':
return '[' + ','.join(PGType.save(self, a) for a in x) + ']'
else:
return '{' + ','.join(PGType.save(self, a) for a in x) + '}'
else:
if self.check and (x >= self.check or x <= -self.check):
raise ValueError("Out of bounds")
if isinstance(x, str):
return '"' + x + '"'
else:
return str(x)
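The two container formats correspond to postgres array literals (`{...}`) and jsonb lists (`[...]`), with `\N` for NULL. A simplified standalone mock of the save logic:

```python
def pg_save(x, jsonb=False):
    # Simplified mock of PGType._save/save: NULLs, arrays vs jsonb, strings
    if x is None:
        return r'\N'
    if isinstance(x, (list, tuple)):
        body = ','.join(pg_save(a, jsonb) for a in x)
        return '[' + body + ']' if jsonb else '{' + body + '}'
    if isinstance(x, str):
        return '"' + x + '"'
    return str(x)

assert pg_save([1, 2, 3]) == '{1,2,3}'
assert pg_save([1, [2, 3]], jsonb=True) == '[1,[2,3]]'
assert pg_save(None) == r'\N'
```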
class pg_text(PGType):
pg_type = 'text'
def load(self, x):
return x
def save(self, x):
return x
class pg_smallint(PGType):
check = 2**15-1
pg_type = 'smallint'
class pg_integer(PGType):
check = 2**31-1
pg_type = 'integer'
class pg_bigint(PGType):
check = 2**63-1
pg_type = 'bigint'
class pg_numeric(PGType):
# Currently only used to store large integers, so no decimal handling needed
pg_type = 'numeric'
class pg_smallint_list(PGType):
check = 2**15-1
pg_type = 'smallint[]'
class pg_integer_list(PGType):
check = 2**31-1
pg_type = 'integer[]'
class pg_bigint_list(PGType):
check = 2**63-1
pg_type = 'bigint[]'
class pg_float8_list(PGType):
# We use double precision since that's what postgres returns as the type (and thus what database.py checks for)
pg_type = 'double precision[]'
class pg_text_list(PGType):
pg_type = 'text[]'
class pg_numeric_list(PGType):
pg_type = 'numeric[]'
class pg_boolean(PGType):
pg_type = 'boolean'
def load(self, x):
if x in ['t','1']:
return True
elif x in ['f','0']:
return False
else:
raise RuntimeError
def save(self, x):
if x:
return 't'
else:
return 'f'
class _rational_list(PGType):
"""
A list of rational numbers (or nested such lists), stored as lists of strings.
"""
def load(self, x):
def recursive_QQ(y):
if isinstance(y, str):
return QQ(y)
else:
return list(map(recursive_QQ, y))
x = PGType.load(self, x)
return recursive_QQ(x)
def save(self, x):
def recursive_str(y):
if isinstance(y, list):
return [recursive_str(z) for z in y]
else:
return str(y)
x = recursive_str(x)
return PGType.save(self, x)
class _rational_mults(PGType):
"""
A non-nested list of rational numbers, where multiplicities
are emphasized by appending a letter, e.g. ["1/2A", "1/2B", "2/3A"]
"""
def load(self, x):
x = sage_eval(re.sub(r'[A-Z]', '', x).replace('{','[').replace('}',']')) # Remove letters
return [QQ(y) for y in x]
def save(self, x):
cntr = Counter()
def stringify(x):
res = str(x) + cremona_letter_code(cntr[x]).upper()
cntr[x] += 1
return res
return PGType.save(self, [stringify(y) for y in x])
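For example, the slopes [1/2, 1/2, 2/3] serialize as '{"1/2A","1/2B","2/3A"}'. A simplified standalone sketch of the save direction (using plain A-Z suffixes rather than full Cremona codes, so it only handles up to 26 repeats):

```python
from collections import Counter
from fractions import Fraction

def save_mults(xs):
    # Disambiguate repeated rationals by appending A, B, C, ...
    cntr = Counter()
    out = []
    for x in xs:
        out.append(str(x) + chr(ord('A') + cntr[x]))
        cntr[x] += 1
    return '{' + ','.join('"%s"' % s for s in out) + '}'

slopes = [Fraction(1, 2), Fraction(1, 2), Fraction(2, 3)]
assert save_mults(slopes) == '{"1/2A","1/2B","2/3A"}'
```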
class pg_rational_list(_rational_list):
pg_type = 'text[]'
class pg_rational_mults(_rational_mults):
pg_type = 'text[]'
class pg_jsonb(PGType):
pg_type = 'jsonb'
class pg_bccache(PGType):
pg_type = 'bccache'
def __init__(self, func=None, internal=True):
PGType.__init__(self, func, internal)
def load(self, x):
return BasechangeCache(sage_eval(x.replace('{','[').replace('}',']')))
def save(self, x):
return PGType.save(self, x.save())
class GenericTask(object):
primeonly = False # if this task should be executed for each prime, rather than each prime power
def __init__(self, stage, g, q, i='', chunksize=None):
self.g, self.q, self.stage, self.i = g, q, stage, i
if chunksize is not None:
self.chunksize = chunksize
self.kwds = {'g':g, 'q':q, 'i':i}
@lazy_attribute
def logheader(self):
return self.stage.controller.logheader.format(g=self.g, q=self.q, i=self.i, name=self.stage.shortname)
@staticmethod
def _done(filename):
return filename.replace('.txt', '.done')
def _logdata(self):
s = ''
if self.g != 0:
s += '_%s' % self.g
if self.q != 0:
s += '_%s' % self.q
return s
@lazy_attribute
def _logfile(self):
stage = self.stage
controller = stage.controller
return controller.logfile_template.format(name=stage.shortname, data=self._logdata())
def ready(self):
return all(ope(self._done(data[-1])) for data in self.input_data)
def done(self):
return all(ope(filename) for filename in self.donefiles)
def log(self, s, abstime=False):
s = str(s).strip()
if abstime:
s += " (%s)\n" % (datetime.utcnow())
else:
s += " (%s)\n" % (datetime.utcnow() - self.t0)
if isinstance(self, GenericAccumulator):
s = "[A] " + s
elif self.i != '':
s = "[%s] " % self.i + s
with open(self._logfile, 'a') as F:
F.write(s)
@lazy_attribute
def input_data(self):
"""
Default behavior can be overridden in subclass
"""
return [(None, filename.format(**self.kwds)) for filename in self.stage.input]
def load(self, *args, **kwds):
if self.i != '' and not kwds.pop('loadall',False):
kwds['start'] = start = self.i * self.chunksize
kwds['stop'] = start + self.chunksize
return self.stage.controller.load(*args, **kwds)
@lazy_attribute
def donefiles(self):
if isinstance(self, GenericBigAccumulator):
return [self._done(accum) for (output, accum, attributes) in self.stage.output if accum is not None]
elif isinstance(self, GenericAccumulator):
kwds = dict(self.kwds)
kwds['i'] = ''
return [self._done(output.format(**kwds)) for (output, accum, attributes) in self.stage.output]
else:
return [self._done(output.format(**self.kwds)) for (output, accum, attributes) in self.stage.output]
def save(self, *sources, **kwds):
stage = self.stage
controller = stage.controller
for (outfile, accum, attributes), source in zip(stage.output, sources):
outfile = outfile.format(**self.kwds)
controller.save(outfile, source, attributes, self.t0)
def _run(self):
self.t0 = datetime.utcnow()
self.run()
self.log("Finished")
def enumerate(self, sequence, start=0, freq=None, header=''):
if freq is None:
freq = self.stage.controller.logfrequency
try:
slen = '/%s' % len(sequence)
except Exception:
slen = ''
for i, item in enumerate(sequence, start):
if i and i%freq == 0:
self.log(header + '%s%s' % (i, slen))
yield item
class GenericSpawner(GenericTask):
"""
Subclasses should implement a ``spawn`` method, which returns a list of tasks.
Note that this method is run in the master process, so it should be fast.
The use of a Spawner stage allows the set of tasks to be executed to depend
on the data computed in earlier stages.
"""
def _run(self):
raise RuntimeError("Spawning tasks should not be run")
@lazy_attribute
def chunksize(self):
try:
return ZZ(self.stage.controller.cfgp.get(self.stage.__class__.__name__, 'chunksize'))
except NoOptionError:
return 10000
@lazy_attribute
def numlines(self):
infile = self.input_data[0][1]
i = 0
with open(infile) as Fin:
for i, line in enumerate(Fin, -2): # start at -2 so the final count excludes the 3 header lines
pass
return i
@lazy_attribute
def num_subtasks(self):
num_subtasks = (self.numlines - 1) // self.chunksize + 1