CEGMA is dying…just very, very slowly

This is my first post on this blog in almost three years and it is now almost nine years since I could legitimately call myself a genomics researcher or bioinformatician.

However, I feel that I need to 'come out of retirement' for one quick blog post on a topic that has spanned many previous posts…CEGMA.

As I outlined in my last post on this blog, the CEGMA tool that I helped develop back in 2005 and which was first published in 2007, continues to be used.

This is despite many attempts to tell/remind people not to use it anymore! There are better tools out there (probably many that I'm not even aware of). Fundamentally, the weakness of CEGMA is that it is based on a set of orthologs that was published over two decades ago.

And yet, every week I receive Google Scholar alerts that tell me that someone else has cited the tool again. We (myself and Ian Korf) should perhaps take some of the blame for keeping the software available on the Korf Lab website (I wonder how many other bioinformatics tools from 2007 can still be downloaded and successfully run?).

CEGMA citations (2011-2024)

When I saw that citations had peaked in 2017 and when I saw better tools come along, I thought it would be only a couple of years until the death knell tolled for CEGMA. I was wrong. It is dying…just very, very slowly. There were 119 citations last year and there have been 88 so far this year.

Academics (including former academics) obviously love to see their work cited. It is good to know that you have built tools that were actively used. But please, stop using CEGMA now! My co-authors and I no longer need the citations to justify our existence.

Come back to this blog in another three years, when I will no doubt write yet another post about CEGMA ('For the love of all that is holy, why won't you just curl up and die!').

New BUSCO vs (very old) CEGMA

If I’m only going to write one or two blog posts a year on this blog, then it makes sense to return to my recurring theme of don’t use CEGMA, use BUSCO!

In 2015 I was foolishly optimistic that the development of BUSCO would mean that people would stop using CEGMA — a tool that we started developing in 2005 and which used a set of orthologs published in 2003! — and that we would reach ‘peak-CEGMA’ citations that year.

That didn’t happen. At the end of 2017, I again asked the question 'have we reached peak-CEGMA?' because we had seen ten consecutive years of increasing citations.

Well I’m happy to announce that 2017 did indeed see citations to our 2007 CEGMA paper finally peak:

CEGMA citations by year (from Google Scholar)

Although we have definitely passed peak CEGMA, it still receives over 100 citations a year, and people really should be using tools like BUSCO instead.

This neatly leads me to mention that a recent publication in Molecular Biology and Evolution describes an update to BUSCO:

From the introduction:

With respect to v3, the last BUSCO version, v5, features: 1) a major upgrade of the underlying data sets in sync with OrthoDB v10; 2) an updated workflow for the assessment of prokaryotic and viral genomes using the gene predictor Prodigal (Hyatt et al. 2010); 3) an alternative workflow for the assessment of eukaryotic genomes using the gene predictor MetaEuk (Levy Karin et al. 2020); 4) a workflow to automatically select the most appropriate BUSCO data set, enabling the analysis of sequences of unknown origin; 5) an option to run batch analysis of multiple inputs to facilitate high-throughput assessments of large data sets and metagenomic bins; and 6) a major refactoring of the code, and maintenance of two distribution channels on Bioconda (Grüning et al. 2018) and Docker (Merkel 2014).
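To make the auto-lineage and batch features above concrete, here is a minimal sketch of how BUSCO v5 runs might look on the command line. The file and output names (`assembly.fasta`, `bins_dir/`, `busco_out`) are illustrative placeholders, not taken from the paper:

```shell
# Assess a single eukaryotic genome assembly, letting BUSCO pick
# the most appropriate lineage dataset automatically (feature 4).
# "assembly.fasta" and "busco_out" are placeholder names.
busco -i assembly.fasta -m genome --auto-lineage -o busco_out

# Batch mode (feature 5): pointing -i at a directory assesses every
# assembly inside it, e.g. a set of metagenomic bins, in one run.
busco -i bins_dir/ -m genome --auto-lineage -o busco_batch_out
```

Each run reports completeness as percentages of complete, fragmented, and missing BUSCOs in a short summary file, which is the figure people quote in place of the old CEGMA percentages.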

Please, please, please…don’t use CEGMA anymore! It is enjoying a well-earned retirement at the Sunnyvale Home for Senior Bioinformatics Tools.

Three cheers for JABBA awards


These days, I mostly think of this blog as a time capsule to my past life as a scientist. Every so often though, I’m tempted out of retirement for one more post. This time I’ve actually been asked to bring back my JABBA awards by Martin Hunt (@martibartfast)…and with good reason!

There is a new preprint on bioRxiv…

I’m almost lost for words about this one. You know that it is a tenuous attempt at an acronym or initialism when you don’t use any letters from the 2nd, 3rd, 4th, or 5th words of the full software name!

The approach here is very close to just choosing a random five-letter word. The authors could also have had:

CLAMP: hierarChical taxonomic cLassification for virAl Metagenomic data via deeP learning

HOTEL: hierarcHical taxOnomic classificaTion for viral mEtagenomic data via deep Learning

RAVEN: hieraRchical tAxonomic classification for Viral metagenomic data via dEep learNing

ALIEN: hierArchical taxonomic cLassification for vIral metagEnomic data via deep learniNg

LARVA: hierarchicaL taxonomic classificAtion for viRal metagenomic data Via deep leArning

Okay, as this might be my only blog post of 2020, I’ll say CHEERio!

Damn and blast…I can't think of what to name my software


As many people have pointed out on Twitter this week, there is a new preprint on bioRxiv that merits some discussion:

The full name of the test that is the subject of this article is the Bron/Lyon Attention Stability Test. You have to admit that 'BLAST' is a punchy and catchy acronym for a software tool.

It's just a shame that it is also an acronym for another piece of software that you may have come across.

It's a bold move to give your software the same name as another tool that has been cited at least 135,000 times!

This is not the first, nor will it be the last, example of duplicate names in bioinformatics software, many of which I have written about before.