Postdoc and PhD positions in Cheminformatics at Jena University, Germany

One postdoc position and two PhD positions are available in my newly founded research group at Jena University, Germany.

I am currently moving from my previous position as Head of Cheminformatics and Metabolism at the European Bioinformatics Institute (EBI) to the Institute for Analytical Chemistry at Jena University, where I will be Professor for Analytical Chemistry, Cheminformatics and Chemometrics. The successful candidates will help form the nucleus of the new research group and work in an exciting network of local and international collaborations, such as the PhenoMeNal project funded by the European Commission under its Horizon 2020 framework programme.

Open Positions:

  1. Postdoc: We are looking for a talented cheminformatician, bioinformatician or someone with comparable skills to work on the development of cloud-based methods for computational metabolomics. The successful candidate will work closely with the H2020 e-infrastructure project PhenoMeNal, a European consortium of 14 partners. This position requires excellent skills in at least one modern, object-oriented programming language. A strong interest in metabolomics and cloud computing, as well as the ability to work in a distributed team, will be advantageous. The postdoc will also have the opportunity to participate in the day-to-day management of the group as well as in the organisation of seminars and practical courses for our students.
  2. PhD student, biomedical information mining: In this PhD project the candidate will combine methods of text mining, image mining and cheminformatics to extract information about metabolites and natural products from the published primary literature. This includes opportunities to work with the OpenMinTed consortium, where we have been leading the biomedical use case for the last 1.5 years, as well as with the ContentMine team.
  3. PhD student, cheminformatic prediction of natural product structures: Depending on the skills and interests of the successful candidate, this project can approach the problem of predicting natural product and metabolite structures either from the spectroscopic information one might have about an unknown natural product, or starting from the genome of a natural-product-producing organism.

Both PhD positions require a strong interest in molecular informatics and current IT technologies, programming skills in a modern object-oriented programming language, and the ability to work in geographically distributed teams.

Please send applications in PDF format by email to christoph.steinbeck@uni-jena.de. We will accept applications until the positions are filled.

Background information:

The Friedrich Schiller University Jena (FSU Jena), founded in 1558, is one of the oldest universities in Europe and a member of the Coimbra Group, a network of prestigious, traditional European universities. The University of Jena has a distinguished record of innovations and resulting educational strengths in major fields such as optics, photonics and optical technologies, innovative materials and related technologies, dynamics of complex biological systems, and humans in changing social environments. It has more than 18,000 students. The university’s friendly and stimulating atmosphere and state-of-the-art facilities boost academic careers and enable excellence in learning, teaching and research. Assistance with proposing and inaugurating new research projects and with establishing public-private partnerships is considered a key priority.

About Christoph Steinbeck

 

Alzheimer’s disease, PaDEL-Descriptor, CDK versions, and QSAR models

A new paper in PeerJ (doi:10.7717/peerj.2322) caught my eye for two reasons. First, it's nice to see a paper using the CDK in PeerJ, one of the journals of an innovative, gold Open Access publishing group. Second, that's what I call a graphical abstract (shown on the right)!

The paper describes a collection of Alzheimer-related QSAR models. It primarily uses fingerprints, calculated in particular with the PaDEL-Descriptor software (doi:10.1002/jcc.21707). I just checked the (new) PaDEL-Descriptor website and it still seems to use CDK 1.4. The page has the note "Hence, there are some compatibility issues which will only be resolved when PaDEL-Descriptor updates to CDK 1.5.x, which will only happen when CDK 1.5.x becomes the new stable release." and I hope Yap Chun Wei will soon find time to make this update. I had a look at the source code, but with no NetBeans experience and no install instructions, I was unable to compile it. AMBIT is now up to speed with CDK 1.5, so the migration should not be too difficult.

Mind you, PaDEL is used quite a bit, so the impact of such an upgrade would be substantial. The Wiley webpage for the article mentions 184 citations, Google Scholar counts 369.

But there is another thing. The authors of the Alzheimer paper compare various fingerprints and the predictive powers of models based on them. I am really looking forward to a paper where the authors compare the same fingerprint (or set of descriptors) but with different CDK versions, particularly CDK 1.4 against 1.5. My guess is that the models based on 1.5 will be better, but I am not entirely convinced yet that the increased stability of 1.5 is actually going to make a significant impact on the QSAR performance... what do you think?
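For concreteness, computing such a fingerprint with the CDK takes only a few lines. Below is a minimal sketch (mine, not taken from the paper) using the CDK 1.5.x API and the default path-based Fingerprinter; running equivalent code against 1.4.x (where the method names differ slightly) and comparing the resulting bit sets would be a first step towards such a version comparison:

import org.openscience.cdk.fingerprint.Fingerprinter;
import org.openscience.cdk.fingerprint.IBitFingerprint;
import org.openscience.cdk.interfaces.IAtomContainer;
import org.openscience.cdk.silent.SilentChemObjectBuilder;
import org.openscience.cdk.smiles.SmilesParser;

public class FingerprintCheck {
    public static void main(String[] args) throws Exception {
        // parse a molecule from SMILES (caffeine here, just as an example input)
        SmilesParser parser = new SmilesParser(SilentChemObjectBuilder.getInstance());
        IAtomContainer mol = parser.parseSmiles("CN1C=NC2=C1C(=O)N(C)C(=O)N2C");

        // default path-based CDK fingerprint; swap in another IFingerprinter to compare
        Fingerprinter fingerprinter = new Fingerprinter();
        IBitFingerprint fp = fingerprinter.getBitFingerprint(mol);

        // the bit set is what a QSAR model would consume; diff this between CDK versions
        System.out.println(fp.asBitSet());
    }
}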


Simeon, S., Anuwongcharoen, N., Shoombuatong, W., Malik, A. A., Prachayasittikul, V., Wikberg, J. E. S., Nantasenamat, C., Aug. 2016. Probing the origins of human acetylcholinesterase inhibition via QSAR modeling and molecular docking. PeerJ 4, e2322. doi:10.7717/peerj.2322

Yap, C. W., May 2011. PaDEL-descriptor: An open source software to calculate molecular descriptors and fingerprints. Journal of Computational Chemistry 32 (7), 1466-1474. doi:10.1002/jcc.21707

The Groovy Cheminformatics scripts are now online

My Groovy Cheminformatics with the Chemistry Development Kit book has now sold more than 100 copies via Lulu.com. An older release can be downloaded as CC-BY from Figshare and was "bought" 39 times. That does not really make a living, but it does allow me to financially support CiteULike, for example, where you can find all the references I use in the book.

The content of the book is not unique. The book exists for convenience: it explains things around the APIs, and gives tips and tricks. In the first place it is for myself, to help me quickly answer questions on the cdk-user mailing list. This list is a powerful source of answers, and the archive covers 14 years of user support:


One of the design goals of the book was to have many editions, allowing me to keep all scripts updated. In fact, all scripts in the book are run each time I make a new release of the book, and therefore with each release of the CDK that I make a book release for. That also explains why a new release of the book currently takes quite a bit of time, because there are so many API updates at the moment, as you can read about in the draft CDK 3 paper.

Now, for a long time I had also planned to make the scripts freely available. However, I never got around to making the website to go with that. I have given up on the idea of a website and now use GitHub. So, you can now, finally, find the scripts for the two active book releases on GitHub. Of course, without the explanations and context. For that you need the book.

Happy CDK hacking!

Generic Structure Depiction

Last week I attended the Seventh Joint Sheffield Conference on Chemoinformatics. It was a great meeting with some cool science and attendees. I had the pleasure of chatting briefly with John Barnard who's contributed a lot to the representation, storage, and retrieval of generic (aka Markush) structures (see Torus, Digital Chemistry - now owned by Lhasa).

At NextMove we've been doing a bit of work on processing sketches from patents (see Sketchy Sketches). I learnt a few things about how generic structures are typically depicted that I thought would be interesting to share.

Substituent Variation (R groups)


The most common type of generic feature is substituent variation, colloquially known as R groups. The variation allows concise representation with an invariant/fixed part of a compound and a variable/optional part.


wherein R denotes

That is: anisole, toluene, or ethylbenzene.

Substituent Labels


Multiple substituent labels may be distinguished by a number: R1, R2, ... Rn. However, in reality any label can and will be used. This can be particularly confusing when labels collide with element symbols; examples include Ra (Radium), Rg (Roentgenium), B (Boron), D (Deuterium), Y (Yttrium), and W (Tungsten). The distinction between the label Ra and Radium may be semantically captured by a format but lost in depiction.

To distinguish such labels we can style them differently. By using superscripting and italicizing the label the distinction becomes clear and also somewhat improves the aesthetics of numbered R groups. We avoid subscript due to ambiguities with stoichiometry, for example: –NR2.

Attachment Points


For substituents there are different notation options. In writing, radical nomenclature is used; for the above example we'd say: methoxy (-OMe), ethyl (-Et), or methyl (-Me). However, this doesn't translate well to depictions.

The CTfile actually does store substituents this way and specifies the attachment point (APO) information separately.

$RGP
1
$CTAB
2 1 0 0 0 0 999 V2000
1.9048 -0.0893 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0
2.6192 0.3232 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
M APO 1 1 1
M END
$END CTAB
$CTAB
1 0 0 0 0 0 999 V2000
1.9940 -1.2869 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
M APO 1 1 1
M END
$END CTAB
$CTAB
2 1 0 0 0 0 999 V2000
1.8750 -2.3286 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
2.5895 -1.9161 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
M APO 1 1 1
M END
$END CTAB
$END RGP
Alternatively we may use a virtual or 'null' atom. We can convert to/from the CTfile format, although it's slightly easier to delete the null atom than to add it, due to coordinate generation. A disadvantage of this is that the atom count isn't accurate; however, the labelled group is also a type of null atom and already distorts the atom count. There are unfortunately different ways of depicting this null atom.
Don't use a dative bond style! You have to fudge the valences and it just doesn't work; how would I show a double bond attachment?
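As a concrete illustration of the null-atom approach described above, here is a minimal CDK sketch (my own, purely illustrative, not any toolkit's converter) that builds the -OMe substituent with a pseudo atom marking the attachment point; note how it inflates the atom count, as mentioned above:

import org.openscience.cdk.Atom;
import org.openscience.cdk.AtomContainer;
import org.openscience.cdk.Bond;
import org.openscience.cdk.PseudoAtom;
import org.openscience.cdk.interfaces.IAtomContainer;
import org.openscience.cdk.interfaces.IBond;

public class AttachmentPointExample {
    public static void main(String[] args) {
        // -OMe substituent; the attachment point is represented by a pseudo ('null') atom
        IAtomContainer methoxy = new AtomContainer();
        PseudoAtom attach = new PseudoAtom("*"); // contributes no element, only marks the connection
        Atom oxygen = new Atom("O");
        Atom carbon = new Atom("C");
        methoxy.addAtom(attach);
        methoxy.addAtom(oxygen);
        methoxy.addAtom(carbon);
        methoxy.addBond(new Bond(attach, oxygen, IBond.Order.SINGLE));
        methoxy.addBond(new Bond(oxygen, carbon, IBond.Order.SINGLE));
        // reports 3, even though the substituent only has two real atoms
        System.out.println("atoms (including the attachment point): " + methoxy.getAtomCount());
    }
}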

The first time I encountered attachment points was in ChEBI, where an R group means 'something attaches here' (CHEBI:58314, CHEBI:52569), whilst a 'star' label means 'attaches to something' (CHEBI:37807, CHEBI:32861). This is actually a nice way of thinking about it: like two jigsaw pieces, the asymmetry allows the substituent to connect to the labelled atom.

The 'star' atom used by ChEBI is tempting to use as there is a star atom in SMILES.

*OC
*C
*CC

However, a '*' in SMILES actually means 'unspecified atomic number', and some toolkits impose additional semantics. ChemAxon reads a 'star' to mean 'any atom', whilst OEChem, Indigo, and OpenBabel read it more like an R group, with [*:1] and [*:2] being R1 and R2, etc. ChemAxon Extended SMILES allows us to explicitly encode attachment points.

*OC |$_AP1$|
*C |$_AP1$|
*CC |$_AP1$|
I opted to implement the wavy line notation in CDK, which is preferred by the IUPAC graphical representation guidelines.
A major disadvantage of this notation is mis-encoding by users who mistake it for a wavy up/down stereo bond. I talk more about this in the poster (Sketchy Sketches), but consider the number of times you see the following drawn:
The captured connection table for that sketch does not have null atoms but instead uses carbon:

CC-BY with the ACS Author Choice: CDK and Blue Obelisk papers liberated

Screenshot of an old CDK-based JChemPaint, from the first CDK paper. CC-BY :)
Already a while ago, the American Chemical Society (ACS) decided to allow the Creative Commons Attribution license (version 4.0) to be used on their papers, via their Author Choice program. ACS members pay $1500, which is low for a traditional publisher. While I would even rather see them move to a gold Open Access journal, it is a very welcome option! For the ACS business model it means a guaranteed sale of some 40 copies of this paper (at about $35 each), because it will not immediately affect the sale of the full journal (much). Some papers might have sold more than that had they remained closed access, but for many papers that sounds like a smart move money-wise. Of course, they also buy themselves some goodwill, and green Open Access is just around the corner anyway.

Better, perhaps, is that you can also use this option to make a past paper Open Access under a CC-BY license! And that is exactly what Christoph Steinbeck did with five of his papers, including two on which I am co-author. And these are not the least papers either. The first is the first CDK paper from 2003 (doi:10.1021/ci025584y), which featured a screenshot of JChemPaint shown above. Note that in those days, the print journal was still the target, so the screenshot is in gray scale :) BTW, given that this paper is cited 329 times (according to ImpactStory), maybe the ACS could have sold more than 40 copies. But for me, it means that finally people can read this paper about Open Science in chemistry, even after so many years. BTW, there is little chance the second CDK paper will be freed in a similar way.

The second paper that was liberated this way, is the first Blue Obelisk paper (doi:10.1021/ci050400b), which was cited 276 times (see ImpactStory):


This screenshot nicely shows how readers can see the CC-BY license for this paper. Note that it also lists that the copyright is with the ACS, which is correct, because in those days you commonly gave away your copyright to the publisher (I have stopped doing this, bar some unfortunate recent exceptions).

So, head over to your email client and email support@services.acs.org and let them know you also want your JCICS/JCIM paper available under a CC-BY license! No excuse anymore for not making your seminal work in cheminformatics available as gold Open Access!

Of course, submitting your new work to the Journal of Cheminformatics is cheaper and has the advantage that all papers are Open Access!

Postdoc in Bioinformatics for Anti-Obesity Strategy

Courtesy of Frank Genten

The Steinbeck Group at the European Bioinformatics Institute (EMBL-EBI, Cambridge, UK), together with the lab of Tony Vidal-Puig (U. Cambridge/WT Sanger), is excited to announce a joint opening for a postdoc position to work on a multi-omics project to identify new players in human brown and beige adipocyte recruitment as an anti-obesity strategy. The project involves analysis of multi-omics data sets (metabolomics, transcriptomics, proteomics, among others) and integration of those data into different mathematical modelling frameworks (discrete logical models, kinetic ODE-based, FBA). With these models and data, the fellow will identify novel pharmaceutical strategies to induce BAT generation/WAT browning. The models will be used to evaluate in silico the potential effect of drugs on adipocytes. Finally, the best candidate molecules will be applied to human pluripotent stem cell models to confirm their capacity to induce brown/beige adipogenesis in vitro. Experiments will be performed with the support of experts at WTSI.

Experience in applying mathematical modelling techniques is desirable, as well as previous exposure to large data sets, but it is of course not expected that the candidate has expertise in all of the above, as training will be given in areas where the applicant has less experience.

The EMBL-EBI is part of the European Molecular Biology Laboratory (EMBL) and it is a world-leading bioinformatics centre providing biological data to the scientific community with expertise in data storage, analysis and representation. EMBL-EBI provides freely available data from life science experiments, performs basic research in computational biology and offers an extensive user training programme, supporting researchers in the academic and industrial sectors.

EMBL-EBI and Wellcome Trust Sanger Institute share the Wellcome Genome Campus. This proximity fosters close collaborations and contributes to an international and vibrant campus environment. Researchers are supported by easy access to scientific expertise, well-equipped facilities and an active seminar programme.

The EMBL-EBI–Sanger Postdoctoral (ESPOD) Programme builds on the strong collaborative relationship between the two institutes, offering projects which combine experimental (wet lab) and computational approaches.

Please apply here: http://www.embl.de/jobs/searchjobs/index.php?ref=EBI_00732&newlang=1

I have absolutely no clue why this paper is citing the CDK...


Corrosion behaviors and effects of corrosion products of plasma electrolytic oxidation coated AZ31 magnesium alloy under the salt spray corrosion test


Re: How should we add citations inside software?

Practice is that many cite webpages for the software, and sometimes even just list the name. I do not understand why scholars do not en masse look up the research papers that are associated with the software. As a reviewer of research papers I often have to advise authors to revise their manuscript accordingly, but I think this is something that should be caught by the journal itself. Fact is, not all reviewers seem to check this.

At some point in the future, if publishers also take this seriously, we will have citation metrics for software like we have for research papers and, increasingly, for data (see also this brief idea). You can support this by assigning DOIs to software releases, e.g. using ZENODO. This list on our research group's webpage shows some of the software releases:


My advice for citing software thus goes a bit beyond what is traditionally requested of authors:

  1. cite the journal article(s) for the software that you use
  2. cite the specific software release version using ZENODO (or compatible) DOIs

 This tweet gives some advice about citing software, triggering this blog post:
Citations inside software
Daniel Katz takes this a step further and asks how we should add citations inside software. After all, software reuses knowledge too, stands on algorithmic shoulders, and this can be a lot. This is something I can relate to a lot: if you write a cheminformatics software library, you use a ton of algorithms, all of which are written up somewhere. Joerg Wegner did this too in his JOELib, and we adopted this idea for the Chemistry Development Kit.

So, the output looks something like:


(Yes, I spot the missing page information. But rather than missing information, it's more that this was an online-only journal, and the renderer cannot handle it well. BTW, here you can find this paper; it was my first first-author paper.)

However, at a Java source code level it looks quite different:


The build process takes advantage of the JavaDoc taglet API and uses a BibTeXML file with the literature details. The taglet renders it to full HTML, as we saw above.
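For readers who have not seen it, such a citation in the CDK sources is simply a custom JavaDoc tag that the taglet resolves against the BibTeXML file. A minimal sketch follows; the class name and the citation key are placeholders of my own, not actual CDK code:

/**
 * Example of how an implementation can point at the literature it is
 * based on: the custom tag below is picked up by the citation taglet
 * during the JavaDoc build and resolved against a BibTeXML entry with
 * the same key.
 *
 * @cdk.cite Wiener1947
 */
public class ExampleDescriptor {
    // the actual algorithm would be implemented here
}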

Bioclipse does not use this in the source code, but it does have the equivalent of a CITATION file: the managers, which extend the Python, JavaScript, and Groovy scripting environments with domain-specific functionality (well, read the paper!). You can ask in any of these scripting languages for citation information:

    > doi bridgedb

This will open the webpage of the cited article (which sometimes opens in Bioclipse, sometimes in an external browser, depending on how it is configured).

At a source code level, this looks like:


So, here are my few cents. Software citation is important!

The quality of SMILES strings in Wikidata

Russian Wikipedia on tungsten hexacarbonyl.
One thing that machine readability adds is all sorts of machine processing. Validation of data consistency is one. For SMILES strings, one of the things you can do is test whether the string parses at all. Wikidata is machine readable and, in fact, easier to parse than Wikipedia, for which the SMILES strings were validated recently in a J. Cheminformatics paper by Ertl et al. (doi:10.1186/s13321-015-0061-y).

Because I was wondering about the quality of the SMILES strings (and because people ask me about these things), I made some time today to run a test:
  1. SPARQL for all SMILES strings
  2. process each one of them with the CDK SMILES parser
I can do both easily in Bioclipse with an integrated script:

identifier = "P233" // SMILES
type = "smiles"

sparql = """
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT ?compound ?smiles WHERE {
  ?compound wdt:P233 ?smiles .
}
"""
mappings = rdf.sparqlRemote("https://query.wikidata.org/sparql", sparql)

outFilename = "/Wikidata/badWikidataSMILES.txt"
if (ui.fileExists(outFilename)) ui.remove(outFilename)
fileContent = ""
for (i=1; i<=mappings.rowCount; i++) {
  try {
    wdID = mappings.get(i, "compound")
    smiles = mappings.get(i, "smiles")
    mol = cdk.fromSMILES(smiles)
  } catch (Throwable exception) {
    fileContent += (wdID + "," + smiles + ": " +
                    exception.message + "\n")
  }
  if (i % 1000 == 0) js.say("" + i)
}
ui.append(outFilename, fileContent)
ui.open(outFilename)

It turns out that out of the more than 16 thousand SMILES strings in Wikidata, only 42 could not be parsed. That does not mean the others are correct, but it does mean these 42 are wrong. Many of them turned out to be imported from the Russian Wikipedia, which is nice, as it gives me the opportunity to work in that Wikipedia instance too :)

At this moment, some 19 SMILES still need fixing (the list will change over time, so by the time you read this...):

Maximally Bridging Rings (or, Doing What the Authors Should’ve Done)

Recently I came across a paper from Marth et al. that described a method based on network analysis to support retrosynthetic planning, particularly for complex natural products. I'm no synthetic chemist so I can't comment on the relevance or importance of the targets or the significance of the proposed approach to planning a synthetic route. What caught my eye was the claim that

This work validates the utility of network analysis as a starting point for identifying strategies for the syntheses of architecturally complex secondary metabolites.

I was a little disappointed (hey, a Nature publication sets certain expectations!) that the network analysis was fundamentally walking the molecular graph to identify a certain type of ring, termed the maximally bridging ring. The algorithm is described in the SI and the authors make it available as an online tool. Unfortunately they didn't provide any source code for their algorithm, which was a bit irritating, given that the algorithm is a key component of the paper.

I put together an implementation using the CDK (1.5.12), available in a Github repo. It’s a quick hack, using the parameters specified in the paper, and hasn’t been extensively tested. However it seems to give the correct result for the first few test cases in the SI.

The tool will print out the hash code of the rings recognized as maximally bridging and also generate an SVG depiction with the first such ring highlighted in red, such as shown alongside. You can build a self-contained version of the tool as follows:

git clone git@github.com:rajarshi/maxbridgerings.git
cd maxbridgerings
mvn clean package

The tool can then be run as follows (with the depiction output to Copaene.svg):

java -jar target/MaximallyBridgingRings-1.0-jar-with-dependencies.jar \
  "CC(C)C1CCC2(C3C1C2CC=C3C)C" Copaene