
summer_2020_log


05.28

SRC variables spreadsheet: electrical conductivity

soil pH, soil EC, soil moisture

Vulnerability and resilience sheet from the SRC vars:

income level

proportion of the population with low income (count and percent)

Look at the publication (paper) in the other env. qualities sheet to get some background.

TODO: deal with the ACS data first; try to get the variables out of that weird spreadsheet.

https://www.census.gov/quickfacts/fact/table/US/PST045219

I think we should use this R package to get the census data: https://github.com/walkerke/tidycensus. It can do the API call and process the data with the tidyverse.

The data folder is in CyVerse:

/iplant/home/rwalls/ua-src-data/


Meeting with Lilliana

for her project need to:

  1. ODK with CHEBI import

  2. design patterns for chemicals in CDNO

  3. first template the chemicals hierarchy from CHEBI classes, building the hierarchy she needs. Create classes like CDNO glucose derived from amylopectin.

  4. DOSDP for chemical concentrations using ENVO concentration classes as roots (e.g. concentration of calcium in env material).

05.29

Meeting with Chris from CSM (Colorado School of Mines)

His chemistry data is in this folder: /iplant/home/rwalls/ua-src-data/csm/water_chem/data

In his original MASTER water chem Excel sheet there are RA and FA values (unfiltered and filtered), in units of mg/L.

All values (unless specified) are elemental; Fe = elemental iron.

For Chris' data the material is freshwater.

tidycensus repo

owl_patternizer from Chris: finds template-able axioms in ontologies, to ingest into templates. Pretty cool.

CyVerse Data Store instructions

Get data using iRODS: run iinit with the input fields first (I think only the first time).
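For reference, iinit wants CyVerse's standard iRODS settings (values from memory, double-check against the CyVerse docs):

host: data.cyverse.org
port: 1247
zone: iplant
user: <your CyVerse username>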

iget -K -r /iplant/home/rwalls/ua-src-data/csm/water_chem/data

Get the unique field IDs from the CSM data:

cut -d "," -f 1 *doc_toc.csv > ../test/doc_toc.txt 
sort -u ../test/doc_toc.txt > ../test/doc_toc_unique.txt

cut -d "," -f 1 *field_data.csv > ../test/field_data.txt 
sort -u ../test/field_data.txt > ../test/field_data_unique.txt

cut -d "," -f 1 *ic.csv > ../test/ic.txt 
sort -u ../test/ic.txt > ../test/ic_unique.txt

cut -d "," -f 1 *icp_aes.csv > ../test/icp_aes.txt 
sort -u ../test/icp_aes.txt > ../test/icp_aes_unique.txt
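
The same thing as a loop, piping straight to sort and skipping the intermediate .txt files (untested sketch, assuming the same four CSV prefixes as above):

for f in doc_toc field_data ic icp_aes; do
  # first column = field ID; dedupe with sort -u
  cut -d "," -f 1 *${f}.csv | sort -u > ../test/${f}_unique.txt
done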

06.02

From Ramona: Here is how to deal with large ontology files you need for imports: use ODK.

The standard ODK setup will create lines in the Makefile that import the source ontologies locally the first time, and then check for new versions whenever you make the imports. It stores those source ontologies in a directory called /src/ontology/mirror. It also adds a .gitignore file that tells git not to archive anything in mirror, so the source ontologies are only stored locally.

You can see an example of the lines in the FuTRES ontology Makefile at https://github.com/futres/fovt/blob/master/src/ontology/Makefile starting at line 220. You can see the corresponding .gitignore file at https://github.com/futres/fovt/blob/master/.gitignore, lines 10 and 11.
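
From memory, the generated lines are roughly of this shape (a sketch, not the exact FuTRES rules; ROBOT's -I is --input-iri):

# download/refresh a local mirror of a source ontology
mirror/%.owl:
	$(ROBOT) convert -I http://purl.obolibrary.org/obo/$*.owl -o $@

and the corresponding .gitignore entry just lists the directory:

src/ontology/mirror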

06.03

Meeting with Dartmouth folks: OBA traits for metazoans

Lili submitted to ICBO https://icbo2020.inf.unibz.it/call-for-papers/

metadata standards

To add to Mark's schema.org list: Darwin Core, EML, ISO, WoRMS, ITIS, Genomic Standards Consortium, NetCDF / Climate and Forecast (CF), Acoustic Backscatter Standards.

06.05

Meeting notes; keeping BDL.

ENVO editors meeting: http://bit.ly/cjmzoom Rolling Agenda: https://docs.google.com/document/d/1t5tn-YMA0RJl0ZZHegYBBSgWCTMVx3tB0tSVemCQ2GI/edit

06.08

Playing around with ODK

./../ontology-development-kit/seed-via-docker.sh -cC project.yaml

project.yaml

id: demo
title: Demonstration Ontology
github_org: kaiiam
repo: demo
import_group:
  products:
    - id: ro
    - id: bfo
    - id: pato
    - id: envo
    - id: bco

git remote add origin git@github.com:kaiiam/demo.git

git push -u origin master --force

See https://github.com/kaiiam/demo/blob/master/src/ontology/README-editors.md

It doesn't work with CHEBI; I get the error Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded.

Trying to add new imports

Modeling from https://github.com/futres/fovt/blob/master/src/ontology/Makefile, I added the following to demo.Makefile.

./run.sh make all_imports -> didn't add an oba.owl to imports; also tried ./run.sh make mirror/oba.owl, no results. I don't think this will work, as fovt has IMPORTS = uberon oba bco. Check how ECOCORE is getting CHEBI: they just have it in their IMPORTS line.

Tried adding CHEBI to DEMO to see if that works: ./run.sh make all_imports -> *** No rule to make target 'imports/chebi_import.owl', needed by 'all_imports'. Stop. Added lines to demo.Makefile; that fixed the rule error, but it's still giving the same memory issue.
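
For the record, the edit was along these lines (a sketch following the ECOCORE convention of just listing chebi on the IMPORTS line):

IMPORTS = ro bfo pato envo bco chebi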

I noticed that in the ODK repo's example ENVO config they have the line robot_java_args: '-Xmx8G', which I believe assigns 8 GB of Java memory and is related to the java.lang.OutOfMemoryError. Tried again with a new version of ODK pulled from GitHub (just in case); same issue:

ERROR:root:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.util.Arrays.copyOfRange(Arrays.java:3664)
	at java.lang.String.<init>(String.java:207)
	at java.lang.StringBuilder.toString(StringBuilder.java:407)
	at org.semanticweb.owlapi.rdf.rdfxml.parser.ResourceOrLiteralElement.endElement(StartRDF.java:504)
	at org.semanticweb.owlapi.rdf.rdfxml.parser.RDFParser.endElement(RDFParser.java:206)
	at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:609)
	at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1782)
	at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2967)
	at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
	at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112)
	at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
	at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
	at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
	at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
	at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
	at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
	at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:327)
	at org.semanticweb.owlapi.rdf.rdfxml.parser.RDFParser.parse(RDFParser.java:145)
	at org.semanticweb.owlapi.rdf.rdfxml.parser.RDFXMLParser.parse(RDFXMLParser.java:73)
	at uk.ac.manchester.cs.owl.owlapi.OWLOntologyFactoryImpl.loadOWLOntology(OWLOntologyFactoryImpl.java:220)
	at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.actualParse(OWLOntologyManagerImpl.java:1254)
	at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.loadOntology(OWLOntologyManagerImpl.java:1208)
	at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.loadOntologyFromOntologyDocument(OWLOntologyManagerImpl.java:1151)
	at org.obolibrary.robot.IOHelper.loadOntology(IOHelper.java:476)
	at org.obolibrary.robot.CommandLineHelper.getInputOntology(CommandLineHelper.java:489)
	at org.obolibrary.robot.CommandLineHelper.updateInputOntology(CommandLineHelper.java:583)
	at org.obolibrary.robot.CommandLineHelper.updateInputOntology(CommandLineHelper.java:541)
	at org.obolibrary.robot.ConvertCommand.execute(ConvertCommand.java:130)
	at org.obolibrary.robot.CommandManager.executeCommand(CommandManager.java:248)
	at org.obolibrary.robot.CommandManager.execute(CommandManager.java:192)
	at org.obolibrary.robot.CommandManager.main(CommandManager.java:139)
	at org.obolibrary.robot.CommandLineInterface.main(CommandLineInterface.java:58)
make: *** [mirror/chebi.owl] Error 1
Traceback (most recent call last):
  File "/tools/odk.py", line 770, in <module>
    cli()
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/tools/odk.py", line 735, in seed
    runcmd("cd {}/src/ontology && make && git commit -m 'initial commit' -a && make prepare_initial_release && git commit -m 'first release'".format(outdir))
  File "/tools/odk.py", line 764, in runcmd
    raise Exception('Failed: {}'.format(cmd))
Exception: Failed: cd target/demo/src/ontology && make && git commit -m 'initial commit' -a && make prepare_initial_release && git commit -m 'first release'

Trying to add it to DEMO afterward: copied the ECOCORE version of chebi_import.owl, then ran ./run.sh make all_imports; didn't work (didn't modify the chebi_import.owl file). Tried instead using the ECOCORE version of chebi_terms.txt. Failed again with the memory error:

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
	at uk.ac.manchester.cs.owl.owlapi.OWLDataFactoryImpl.getOWLAnnotationAssertionAxiom(OWLDataFactoryImpl.java:1673)
	...
Makefile:298: recipe for target 'mirror/chebi.owl' failed
make: *** [mirror/chebi.owl] Error 1

Had an issue with the ~/.gitconfig file: git thought it was a directory, so I followed the directions here and moved the old version, then made a new ~/.gitconfig file from the one on my computer:

# This is Git's per-user configuration file.
[user]
# Please adapt and uncomment the following lines:
        name = Kai Blumberg
        email = [email protected]

That worked to run the ODK example (triffo).

Got this version of project.yaml to run too: just needed a better computer. Did it on a medium3 (CPU: 4, Mem: 32 GB, Disk: 240 GB root) Atmosphere VM with Linux/Docker, specifically Ubuntu_CCTools_Docker:

id: demo
title: Demonstration Ontology
github_org: kaiiam
repo: demo
import_group:
  products:
    - id: ro
    - id: bfo
    - id: pato
    - id: envo
    - id: bco
    - id: chebi

06.16

  • Meeting with Ramona June 16th:

    • SRC project: (want functional version by the end of the summer)

      • Priyanka to send me her data headers. (double check with her)
        • Is the Env. concentrations sheet of the SRC Variables her data? She wasn't sure, but she thought it should be. A combo of Priyanka, Monica, and Chris, maybe not comprehensive. Check to make sure it's all there; split it up into sheets for each project.
          • For Priyanka: data for plants in mine tailings w/wout compost; has metal [] in plain compost, might have it in mine tailing/compost. Also has bulk tissue samples of plant roots, (vascular) leaves and shoots (Ramona to add), for concentrations. Also has RNA-seq gene expression data, which might not map to ontology terms.

          • Is this list correct in terms of metals measured for that project? Can I start making these as well?

      • Monica's data: (Ramona will send me the links / find it from Ken)
          1. Garden soil also has metal chemical concentrations: in plants, (garden) soil, and (tap?) water.
          2. ACS: Ken, Dorsey, and I all agreed to look at it to try and figure out what census vars to get; I'll help Dorsey learn some basic R to try and use the tidycensus package to explore the data.
        • Can explore SDGIO but it doesn't seem to have what we need; we would need to add it. I think for the sake of time I'll start with adding the terms to SRPDIO. OK TO MAKE IN SRPDIO for now.
      • CSM abundance data (taxa counts) -> need to figure out how to represent this, maybe in BCO; just the highest level of taxonomic resolution?
      • The Other env. qualities sheet from SRC Variables: what dataset is this associated with? Priyanka's?
    • USGS water data: metal concentrations in water, for Monica, separate from the garden roots (should be the same metals, maybe in another material).

    • CDNO

      • Launch CDNO
      • Prep the concentration-of DP
      • Figure out / prep hierarchical terms
      • Figure out target food product or harvested material type terms for use in the concentration terms
    • isamples

      • I talked to Kirsten's postdoc and suggested IGSN reuse everything they can from MIxS; is this the plan? If so, and if you fund me on this, can I help develop this along with MIxS?
  • Looking across data standards to store and parse interdisciplinary data in a meaningful way.

Notes

Ramona is gone till July 6th, maybe checking email before then.

Garden sites: is this Priyanka's data? RNA-seq in mine tailings and compost-amended mine tailings, plant part data too. CSM has the invertebrate data as well.

iSamples (Internet of Samples): specimens need unique resolvable identifiers. Measure the length of a fossil skeleton, publish a photo; data from a sample can have a long lifetime, but they aren't connected. Where was the rock from, where is it now? Can't currently connect derived data and images; lack of globally unique persistent identifiers. Shared standards: common ID, location. Mint GUIDs, index locally, connect local to global via iSamples Central.

06.18

plant-trait-ontology composition pattern TSV, which is similar to the concentration-of DOSDP, but it uses composition instead of concentration; otherwise the axiom follows the same pattern, for example root system cadmium content.

06.25

The CHEBI Developer Manual explains the star ratings: 2 is for user-submitted, 3 means their team has curated it.

06.29

Damion Dooley did a comparison of OBI, QUDT, and OM a while back, diagrammed at the bottom of https://github.com/pato-ontology/pato/issues/101

Trying to import CHEBI for SRPDIO: got all the other ontologies imported, but when I ran it with CHEBI (sudo ./run.sh make all_imports_owl) I got the following error:

Ontology IRI cannot be null
Use the -vvv option to show the stack trace.
Use the --help option to see usage information.
Makefile:225: recipe for target 'imports/chebi_import.owl' failed
make: *** [imports/chebi_import.owl] Error 1
rm imports/to_terms_combined.txt imports/chebi_terms_combined.txt imports/pato_terms_combined.txt imports/pco_terms_combined.txt imports/bco_terms_combined.txt imports/po_terms_combined.txt imports/sdgio_terms_combined.txt imports/envo_terms_combined.txt imports/uo_terms_combined.txt imports/ro_terms_combined.txt

Commented out all the other terms, added the list of CHEBI imports to the .txt file, and am trying it again starting at 4:29.
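
Next thing to try if this fails: force just the CHEBI target to rebuild (a sketch; -B is make's force-rebuild flag, target name taken from the error above):

sudo ./run.sh make imports/chebi_import.owl -B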

07.03

Examining how to amend an ODK Makefile to add a DOSDP. Checking out the ENVO Makefile as an example, I think we'll need the following:

MODS = entity_attribute_location entity_quality_location entity_attribute process_attribute chemical_concentration
ALL_MODS_OWL = $(patsubst %, modules/%.owl, $(MODS))
ALL_MODS_CSV = $(patsubst %, modules/%.csv, $(MODS))

I think this can be added to an ODK Makefile with the following, adding the list of DOSDPs to the PATTERNS = line:

PATTERNS = 

PATTERN_ROOTS = $(patsubst %, $(PATTERNDIR)/%, $(PATTERNS))
PATTERN_FILES = $(foreach n,$(PATTERN_ROOTS), $(n).owl)

all_patterns: $(PATTERN_FILES)

ENVO also has these lines that I think we'll need.

all_modules: all_modules_owl all_modules_obo
all_modules_owl: $(ALL_MODS_OWL)

modules/%.owl: modules/%.csv patterns/%.yaml curie_map.yaml
	dosdp-tools --table-format=csv --template=./patterns/$*.yaml --outfile=modules/$*.tmp.owl --obo-prefixes=true generate --infile=modules/$*.csv
	$(ROBOT) annotate --input modules/$*.tmp.owl -O http://purl.obolibrary.org/obo/envo/modules/$*.owl --output modules/$*.owl && rm modules/$*.tmp.owl

I think the all_modules_owl: $(ALL_MODS_OWL) line will produce a list from all of the entries in MODS = in a format like modules/entity_attribute_location.owl, which I'm guessing will get compiled by the modules/%.owl: target line. That line then links to the module CSV, pattern YAML, and curie_map.yaml needed to compile the DOSDP.
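
If I'm reading the pattern rule right, asking for a single module should run the dosdp-tools and robot annotate steps above for just that module, e.g. (using the chemical_concentration name from MODS):

./run.sh make modules/chemical_concentration.owl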

07.09

From Matt S: http://merenlab.org/2020/06/27/seminar-series-on-microbial-omics/ seminar series on microbial omics.

Meeting with John Deck:

https://github.com/biocodellc/ontology-data-pipeline (a fork of the ppo-data-pipeline)

see the config file(s)

entity.csv has the OWL classes (from OBO) that we need. Curly braces {} will look up the label in the ontology; you could also put the OBO PURL.

excludedtypes: to exclude some when reasoning

mapping.csv: list of incoming columns, Darwin Core IRIs (which become data properties), and an entity alias which links back to a class in entity.csv.

relations.csv declares how classes are related to one another; it drives not just how the data properties are mapped to the entities and classes but how the relations are built.

ppo.owl, the application ontology

rules.csv: validation logic, stuff like lat/long has to be a float, etc.

reasoner.conf: options about the type of reasoning you use

Example of data preprocessing for the pipeline: https://github.com/biocodellc/ppo-data-pipeline/blob/master/projects/npn/phenophase_descriptions.csv, where they map the CSV columns to ontology terms. I think this is what I need and was asking for. The above files do more of the rules for reasoning, setting up the whole ontology framework using an instance of a BCO observing process, attaching properties to it like lat/long, then linking to other classes and how they connect together; whereas the preprocessing is the straight mapping between CSV columns and ontology terms.
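
Roughly the shape of that mapping (a made-up row for illustration; the real column headers and term IRIs are in the npn file above):

field,defined_by,definition
open flowers present,<IRI of the matching PPO phenotype class>,whether any open flowers were observed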

Meeting with Ramona

Add something like the ppo-data-pipeline preprocessing CSV to UA-SRC-data/data_loaders, perhaps to one of the variables.csv files; ask Ken.

owltools command line examples, but OWLTools is deprecated in favor of ROBOT, so use robot merge instead.
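
For example, the ROBOT equivalent (robot merge is real ROBOT syntax; the file names here are made up):

robot merge --input ontology_a.owl --input ontology_b.owl --output merged.owl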

CDNO nutritional framework: use a ROBOT template (see ENVO:ebi_biomes as an example) that has columns for is a and derives from relationships. We might want to make an R/Python script or office macro which pre-composes definitions based on the rows of the CSV (like a DOSDP would) to feed in as input to the ROBOT template. Then we can separately make a subset in SKOS, where we use SPARQL to turn the is a and derives from relationships into skos:broader or narrower, in order to have Graham's desired hierarchically structured controlled vocabulary.
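A sketch of that SKOS step, assuming a SPARQL CONSTRUCT query that rewrites the subclass / derives from edges into skos:broader (both file names invented here):

robot query --input cdno.owl --query isa_to_broader.sparql cdno_skos.ttl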

Finally, for the "wheat grain" terms, we could tentatively make a DOSDP which uses a derives from relationship between an NCBITaxon term and a PO term.

Make SRPDIO/ENVO concentration of X in environmental material terms, to help have those higher-level terms for the reasoner to build a hierarchy under.

07.13

https://esip-bds.qiqochat.com/breakout/0/uiuqaDDXLRRZAMlbtlTQsygZA

ESIP Summer Meeting homepage on QiqoChat; ESIP figshare submission.

07.17

https://scicrunch.org/resources RRIDs: identifiers for classes of instruments, from Ted Habermann. Also from Ted: check out UDUNITS, the NetCDF units conversion toolkit.

07.21

Ramona says to compile a list of potential journals for the review paper, to help structure the conclusion:

  • PLOS Biology has Meta-Research Articles. Themes include, but are not limited to: transparency; established and novel methodological standards; sources of bias (conflicts of interest, selection, inflation, funding, etc.); data sharing; evaluation metrics; assessment; and reward and funding structures. The data sharing & methods standards themes seem appropriate. IF: 7.62

  • GigaScience (Bonnie's favorite) has a review section: "GigaScience has long supported efforts promoting the use of reporting guidelines (see more in our editorial, FAIRsharing recommendations, and our Editorial Policies and Reporting Standards page). For more detailed information on the Editorial Policies and reporting standards we follow see the policies pages of Oxford University Press and GigaScience." Might make sense. IF: 6.95

  • PLoS Computational Biology has a Reviews section maybe relevant? Reviews reflect rapidly advancing or topical areas in computational biology research that are of broad interest to the entire biology community and have not been covered significantly by other journals. A review should aim for 3000-6000 words and two or three figures or other display items. A review should not be a mere summary of the field; it should be a critique with new points of view which are supported by existing literature from a variety of authors. IF: 4.38

  • BMC Environmental Microbiome review: Reviews are summaries of recent insights in Environmental Microbiome. Key aims of reviews are to provide systematic and substantial coverage of mature subjects, evaluations of progress in specified areas, and/or critical assessments of emerging technologies. As advised at http://www.standardsingenomics.org/, publication of Standards in Genomic Sciences has been transferred to a new publisher, BioMed Central, on behalf of the Genomic Standards Consortium, and all submissions should now be made directly through the new BioMed Central site; Standards in Genomic Sciences redirects back to BMC Environmental Microbiome. NO IF.

Would be cool, but would have to request it:

  • Genome Biology review: Reviews published by Genome Biology are authoritative syntheses of topics of current interest, written by leaders in the field. Most Reviews are commissioned by the Genome Biology editors and we do not encourage uncommissioned submissions for this type of article. However, potential authors are welcome to email a presubmission enquiry to [email protected]. The aims and scope page says they do reviews; areas covered include, but are not limited to: sequence analysis; bioinformatics; insights into molecular, cellular and organismal biology; functional genomics; epigenomics; population genomics; proteomics; comparative biology and evolution; systems and network biology; genome editing and engineering; genomics of disease; and clinical genomics. IF: 12.16

  • Nucleic Acids Research (NAR) has Survey and Summary; I'm unsure about this. This section accommodates reviews and analyses relevant to the journal's core area of interest in nucleic acids and proteins involved in nucleic acid metabolism and/or interactions. A typical Survey and Summary occupies about 15 printed pages, with 4-6 display items and 100-150 references, but shorter and more focused contributions are also welcome. Articles should be written in a way to draw in a diverse audience of specialists and non-specialists and should provide critical analyses, a synthesis of ideas, and new insights. Articles that catalog work in a field without critical discussion are discouraged. Survey and Summary authors should have a track record of publication within the field covered by the article. Prospective authors are encouraged to browse recent articles published in the Survey and Summary category to gain a sense of format and style. Although many Surveys and Summaries are by invitation, the journal also welcomes unsolicited proposals. For unsolicited contributions, a presubmission enquiry should be sent to Dr. Julian Sale (Email: [email protected]). This should include a title, outline, proposed submission date, and a brief summary of the authors' background and qualifications. IF: 11.14

  • BMC Biology review: Reviews published by BMC Biology are authoritative syntheses of topics of current interest, written by leaders in the field. Most Reviews are commissioned by the BMC Biology editors and we do not encourage uncommissioned submissions for this type of article. However, potential authors are welcome to email a presubmission enquiry to [email protected]. IF: 6.45. CHECK out BMC Genomics and BMC Genetics.

  • Bioinformatics has reviews but: Reviews (3-8 pages): Most review papers are commissioned, although the editors welcome suggestions from prospective authors who should in the first instance submit a draft or abstract/summary no longer than a page. IF: 4.43

Maybe

  • Frontiers in Genetics has review Review articles cover topics that have seen significant development or progress in recent years, with comprehensive depth and a balanced perspective. Reviews should present a complete overview of the state of the art (and should not merely summarize the literature), as well as discuss the following: 1) Different schools of thought or controversies, 2) Fundamental concepts, issues, and problems, 3) Current research gaps, 4) Potential developments in the field. Review articles are peer-reviewed, have a maximum word count of 12,000 and may contain no more than 15 Figures/Tables. Authors are required to pay a fee (A-type article) to publish a Review article. Review articles should have the following format: 1) Abstract, 2) Introduction, 3) Subsections relevant for the subject, 4) Discussion. Review articles must not include unpublished material (unpublished/original data, submitted manuscripts, or personal communications) and may be rejected in review or reclassified, at a significant delay, if found to include such content. If we did this we couldn't include any MIxS usage stats from EBI/NCBI. IF: 3.36

  • BMC Bioinformatics review Reviews are summaries of recent insights in specific research areas within the scope of BMC Bioinformatics. Key aims of Reviews are to provide systematic and substantial coverage of a research or clinical area of wide interest or to present a critical assessment of a specified area. A review must focus on recent research and on a topic that is timely and relevant to the field. All Reviews published by BMC Bioinformatics are peer-reviewed. Most Reviews are commissioned by the Editor of BMC Bioinformatics and we do not encourage unsolicited submissions for this type of article. Review articles may be considered at the Editor’s discretion and their decision on consideration is considered final. IF: 2.61

  • PeerJ has a Literature Review type; doesn't give a ton of context about scope. IF: 2.34

  • BMC Microbiome review: Reviews are summaries of recent insights in specific research areas within the scope of Microbiome. Key aims of reviews are to provide systematic and substantial coverage of mature subjects, evaluations of progress in specified areas, and/or critical assessments of emerging technologies (from their aims page). NO IF.

NOPE

  • Genome Research: looks like it does reviews, but none of the topics they are interested in are super relevant.

  • PLoS One? Has Collection Reviews, but I'm unclear what they mean by collection. From here it also has systematic reviews, "whose methods ensure the comprehensive and unbiased sampling of existing literature", but I can't find any more info.

  • F1000 guidelines: Reviews should provide a balanced and comprehensive overview of the latest discoveries in a particular field. Note that Faculty Reviews are by invitation only.

  • eLife reviews are by invitation only.

07.23

Meeting with Chris and Stilian on Fairprotax

http://obofoundry.org/ontology/micro https://www.ebi.ac.uk/ols/ontologies/micro https://msphere.asm.org/content/2/4/e00237-17 https://www.nature.com/articles/s41597-020-0497-4 https://www.rhea-db.org/

08.19

https://wptron.com/use-a-generic-usb-2-0-10100m-to-ethernet-adaptor-on-macos-10-12-sierra/