Posted by: Kay at Suicyte | December 15, 2013

Submission time again

Four years since my last post, just pretending nothing happened since then …

Just trying to submit a manuscript online and struggling with the online submission system at Cell Press – probably the same kind of problems at other places. I can understand that the final paper must not exceed a certain length, but is it really necessary to count letters (including spaces, for sure) for every stupid free-text input field? Oh well, it is probably not called ‘online submission’ for nothing. Here is a definition from the Longman Dictionary of Contemporary English:

sub‧mis‧sion

the state of being completely controlled by a person or group, and accepting that you have to obey them

Posted by: Kay at Suicyte | March 23, 2009

The perils of having foreign co-authors… (just kidding)

Here is what communication problems can do for you:

This morning, I had a phone conversation with a scientist abroad, talking about a joint paper project. At the end, we agreed on who was going to write which part of the manuscript. My colleague, let’s call him Peter, said something along the lines of “I will write the introduction and will create figure 1, and you should do figures 2 and 3”. I wasn’t too enthusiastic because figure 2 looked like a lot of work, but I agreed.

Only after spending several hours on this blasted figure 2, and after another phone call, did it turn out that Peter had not said “and you should do figures 2 and 3”, but rather “and Yu should do figures 2 and 3”.

Now we have two versions of figure 2, one by me and one by Yu.

Sigh.

Posted by: Kay at Suicyte | December 23, 2008

Eugene Koonin direct

I have just read the blog post by Derek Lowe ‘Publishing your work the Easy Way‘, which covers the case of M.S. El Naschie, who is Editor-in-Chief of the journal Chaos, Solitons & Fractals and apparently uses this position to publish tons of his own papers in that journal. Nature has also covered this case before, where it was said that the papers are mostly of poor quality and that El Naschie might also have ‘improved’ his CV by claiming a wrong affiliation with a respectable institute. I am neither an expert on solitons nor on fractals (chaos maybe, if you consider the state of my desktop), and I am clearly not the one to judge the quality of El Naschie’s contributions. Check here for some scientific details.

Reading about this story made me think about how acceptable it is to publish stuff in a journal that you have founded yourself and where you are acting as editor-in-chief. Before knowing about the El Naschie affair, I would not have seen a problem here. Sure, you could probably select your own reviewers or bypass peer review altogether. But editors-in-chief are typically prominent scientists (right?), and prominent scientists don’t behave that way. Or do they? Maybe I have been too naive. Obviously, I don’t have any first-hand experience, as I never founded my own journal and no publisher in their right mind would ask me to act as editor-in-chief of anything. On the other hand, I am on the editorial board of a few journals, and in this role you are typically asked by the publisher to ‘submit your best manuscripts’ to the journal in order to make the world a better place. And to increase the impact factor.

Another reason for my somewhat careless attitude to ‘self-publishing’ is that I have seen several examples of excellent papers published in self-edited journals. Just by looking at the numbers alone, some journals might appear to be in a situation similar to that of Chaos, Solitons & Fractals. Maybe not quite the same, if the amazing number of 321 El Naschie papers in ‘CSF’ mentioned in Derek’s post is true.

There is one journal in my area which is dominated by papers from its chief editor: Biology Direct. Just look at the numbers: since its inception in 2006, Biology Direct has published 24 papers from Eugene Koonin. This might appear small compared to 321, but if you consider that the journal has published only 131 papers altogether, Eugene Koonin has authored more than 18% of all published manuscripts. Before you jump to wrong conclusions, there are several big differences from the El Naschie case: Eugene Koonin also publishes scores of papers in inconspicuous journals, including Nature and its ilk; he really works at the NCBI (I have proof of that); and his papers in Biology Direct do make sense (to me, at least). Most importantly, there is undeniable evidence that Eugene did not use his position to bypass peer review: Biology Direct has an open peer review policy – the reviewers’ comments are published with the papers!

Post scriptum: After checking out the web page of El Naschie, I have the strong feeling that there is some problem with this guy. Even without understanding a single equation of his E-Infinity theory, someone who sees himself as a ‘central figure in the field of fractal cosmology’ and at the same time publishes on how to slow the aging process cannot expect to be taken seriously.

Posted by: Kay at Suicyte | December 21, 2008

Ubiquitin on Lake Garda, reloaded

Hardcore readers of this blog might remember my reports on the EMBO conference “Ubiquitin and Ubiquitin-like modifiers in cellular regulation” in September 2007 in Riva del Garda. See e.g. here, here, and most scientifically here. As I wrote before, this was one of the best conferences I have been to so far, with excellent scientific content, a great atmosphere, and a beautiful setting.

I have just been informed that there will be a 2nd such conference in September 2009, at the same venue in Riva del Garda. The 2009 title will be “Ubiquitin and Ubiquitin-like modifiers in health and disease”.

Again, the organizers (mainly the Rubicon folks) have assembled a very promising line-up of invited speakers, including Aaron Ciechanover, Allan D’Andrea, Allan Weissman, Brenda Schulman, Chris Lima, Dan Finley, Frauke Melchior, Helle Ulrich, Heran Darwin, Hermann Schindelin, Ivan Dikic, Jeff Brodsky, Keiji Tanaka, Maria Masucci, Mark Hochstrasser, Matthias Peter, Michael Rape, Michele Pagano, Mike Tyers, Paolo di Fiore, Ray Deshaies, Ron Kopito, Simona Polo, Stefan Jentsch, Sylvie Urbe, Wade Harper and Yosef Yarden. This is clearly going to be exciting.

Judging by the presence of Heran Darwin and Hermann Schindelin, the ubiquitin-like modifiers will this time also include their prokaryotic counterparts, including the mysterious Pup pathway in mycobacteria. Compared to 2007, there is another advantage for the prospective 2009 audience: this time you won’t have to sit through 30 min of ubiquitin bioinformatics. It looks like the organizers have learned their lesson and didn’t invite me again. It is possible, though, that Matthias Peter is going to talk about our most recent collaborative project, which is going to hit the (scientific) tabloid press in January 2009. This one has bacteria in it, too.

Anyway, this is going to be an excellent opportunity to learn what is going on in the ubiquitin field – especially if you are based in Europe.

Posted by: Kay at Suicyte | December 16, 2008

Microarrays may be bad, but not that bad.

I normally do not blog about topics related to my daytime job, which involves a lot of microarray data analysis. However, a series of recent blog posts [here, here and here] talk about microarray-related problems that differ so much from my own experiences that I cannot let them go uncommented.

I am the last person to claim that microarrays are a perfect tool for tackling every question conceivable. They are not. DNA microarrays can be seen as a kind of hammer that is (rightfully) applied to a few nails, but unfortunately also to lots of objects with no nail-like properties whatsoever. Microarray data are problematic in many different ways. However, we should be careful not to throw out the baby with the bath water.

Here are the main points of criticism raised in the recent posts, along with my comments. I might exaggerate to some extent, but this just serves to make my point clearer.

1) Microarrays are useless, because it has been shown that protein levels correlate poorly with mRNA levels. You hear this argument a lot, especially from the mass-spec people, who want to convince you that only they have a handle on the truth. I admit freely: microarrays are mostly useless if you want to learn about protein levels. This is not a nail, go use another tool. You should use microarrays mainly if you are interested in mRNA levels. There are lots of interesting applications for that, e.g. learning which transcription factors are activated. Several stress responses, including those to toxic substances, lead to a dramatic and very specific induction of certain mRNAs. You don’t have to know if the corresponding proteins are really being made; the transcriptional response is the earliest and most specific indicator of many stress conditions. This knowledge can be very useful in its own right. Just don’t try to predict whether there is more protein A in the cell than protein B just by looking at their microarray signals.

By the way, microarrays are somewhat better at judging changes in protein levels, rather than the protein levels themselves. But still, if protein levels are what you are after, you should turn to another tool.

2) Microarray experiments cannot be trusted because the statistical significance values are wrong. This argument is reiterated here, and the author certainly has a point. Somewhat surprisingly, the examples used in the blog post concern genetic association studies rather than the common gene-expression microarrays. There also seems to be some confusion about the number of SNPs vs. the number of genes. Nevertheless, the main problem is shared between GWAS and transcriptomics studies: a microarray gives you tons of data, and the chance that one of the genes appears strongly regulated by chance alone is substantial. On the other hand, this ‘multiple testing’ problem is well known in the microarray field and is routinely taken into account. There are methods to correct the p-values for multiple testing (the best known is the ‘Bonferroni correction‘). Thus, a situation similar to the one described in the blog post would certainly not reach a p-value of 0.05, at least not in a responsible microarray analysis.
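The multiple-testing point is easy to demonstrate with a toy simulation (the code and numbers below are my own sketch, not taken from any of the cited posts): if you test 10,000 genes with no real regulation at p < 0.05, hundreds will look ‘significant’ by chance, while a Bonferroni-corrected threshold catches essentially none.

```python
# Toy demonstration of the multiple-testing problem on a microarray.
import random

random.seed(42)
n_genes = 10_000
alpha = 0.05

# Simulate p-values for genes with NO real regulation: uniform on [0, 1].
p_values = [random.random() for _ in range(n_genes)]

# Naive thresholding: roughly alpha * n_genes genes look 'significant'
# purely by chance (here, on the order of 500).
naive_hits = sum(p < alpha for p in p_values)

# Bonferroni correction: divide the threshold by the number of tests.
bonferroni_hits = sum(p < alpha / n_genes for p in p_values)

print(naive_hits)       # hundreds of spurious 'hits'
print(bonferroni_hits)  # almost always zero
```

Bonferroni is the most conservative option; in practice, false-discovery-rate procedures are also widely used for microarray data, but the principle is the same.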

3) Batch effects play a major role and often conceal the real regulation. Admitted, there are batch effects. However, with modern microarray platforms and hybridization methods they can be safely neglected – at least in comparison to other common noise sources. Obviously, batch effects depend on the technology used. I have experience with three different microarray platforms (two major vendors and one type developed by the company I work for), and for each of them the batch effects were typically much smaller than the noise from sample preparation or inter-individual differences.

While we are talking about noise sources, here are what I consider the main offenders:

1) Sampling. Particularly problematic when dealing with surgical or biopsy samples. Are you sure that each of your biopsies samples exactly the same tissue structure? With the same relative proportions of cells? The same amount of blood in the tissue samples? Least problematic when comparing things like treated and untreated cell lines.

2) Inter-individual differences. This problem is often under-appreciated but is slowly gaining publicity. Most problematic when dealing with human samples or other outbred (animal) populations. The differences between ‘healthy’ tissue of two donors are often much more pronounced than those between ‘healthy’ and ‘diseased’ tissue of the same donor. Less problematic when dealing with inbred strains or cell culture. Even then, there might still be inter-individual differences related to e.g. nutritional status, circadian effects, etc.

3) Extreme amplification protocols. For many microarray studies, the available material is severely limited. Compliance of tissue donors is often inversely correlated with the size of the biopsy needle. There are several protocols for getting sufficient cDNA for microarray analysis out of very small samples, and some of them are clearly better than others. However, all of them share one common problem: less starting material means more dramatic amplification, which in turn means more noise.

Needless to say, most of these problems can be overcome by using really large sample numbers. Unfortunately, this is often impossible due to limited availability of samples or money. As a consequence, we have to live with the shortcomings mentioned above. I usually recommend that microarray results not be considered the final outcome of an experiment, but rather a method for identifying candidate genes that can be used for a more detailed follow-up study.

For the sake of full disclosure: if you haven’t noticed already, I work for a company that sells microarrays, microarray services and microarray data analysis. Obviously, this affiliation might bias my view of things. Nevertheless, I speak only for myself and not for my employer. I have tried my best to keep this brief discussion as unbiased as possible; it is just meant to reflect my personal experience from about 10 years of microarray data analysis.

Posted by: Kay at Suicyte | August 11, 2008

Chaperone-mediated autophagy

Nature Medicine is not one of the journals I usually follow. Today, several of my literature alerting services – and several press releases as well – pointed to a paper in the AOP section of Nature Medicine that appeared to be a must-read. As usual, the press releases are vague about what the study actually does, but they contain all the words needed to get me interested: autophagy, ubiquitin-proteasome system, Alzheimer’s and Parkinson’s disease – and, of course, the ultimate cure for aging. If you are a liver, that is. The paper by Cong Zhang and Ana Maria Cuervo is entitled Restoration of chaperone-mediated autophagy in aging liver improves cellular maintenance and hepatic function, which in the press coverage becomes Cellular rubbish may hold key to ageing process or Big step in the fight against aging or In scientific first, Einstein researchers correct decline in organ function associated with old age.

Despite these gross exaggerations, I do not regret having read the paper, as it has introduced me to a potentially interesting concept called ‘chaperone-mediated autophagy’ (CMA). Judging by the name, this is something I should have known about before – I must admit that this is not the case. One of the reasons is that there appears to be only one group (and their offshoots) working on CMA. They seem to have discovered it, too. If you want to know the details, have a look at this open-access review by J. Fred Dice.

In brief, CMA is a weird way of degrading cytoplasmic proteins. No, not by the proteasome – by the lysosome! After reading the CMA review, one thing is for sure: had it been my task to intelligently design nature, CMA is not a route I would have taken into consideration. Ok, maybe after a few bottles of a good Chianti. But then, there is ERAD, where lumenal proteins are squeezed back into the cytoplasm by some obscure mechanism, with the idea of degrading them by the proteasome. Why not use a similarly obscure mechanism to squeeze cytoplasmic proteins into the lysosomal lumen for degradation? Apparently, selected cytoplasmic proteins that harbour a certain motif (the KFERQ-related peptide) are recognized by cytoplasmic Hsc70 chaperones together with their usual set of co-chaperones (Hip, Hsp40, Hsp90, Bag-1). Next, this substrate/chaperone assembly is recognized by a receptor consisting of the lysosome-associated membrane protein LAMP-2a, a splice form of the LAMP2 gene. LAMP-2a forms a multimeric complex in the lysosomal membrane, which is thought to act as the translocation pore. Apparently, lysosomal chaperones are also required for pulling the unfolded protein through the pore.

This intriguing mechanism has a fundamental flaw, though: it does not involve ubiquitin. As everybody knows, ubiquitin is required for every interesting biological process. Thus, there is clearly some more work to do. It would also help if a completely unrelated group contributed to the CMA field. As I have said before, I am always skeptical about new proteins/genes/pathways/processes that have only been observed by a single group.

What is the connection between CMA and aging? It had been previously observed (by the Cuervo group) that CMA is abolished if LAMP-2a is knocked down, and that both CMA and lysosomal LAMP-2a levels decline with age. In the new Nature Medicine paper, Zhang and Cuervo have generated a mouse model where the expression level of LAMP-2a in the liver can be modulated. When analyzing aged mice with continuously high LAMP-2a expression in the liver, the authors found that CMA efficiency did not decline with age and that the aged livers showed much less intracellular accumulation of damaged proteins. Also, the liver function of aged mice with high LAMP-2a levels was comparable to that of young mice and better than that of untreated mice of the same age. No matter what you think about CMA, this is a very interesting result.

Update: After doing some more searches, I noticed that there is much more literature on CMA than I previously thought, with different groups involved as well. The “KFERQ” motif has even been reviewed in TiBS as early as 1990. I really wonder how this could have eluded me.

Posted by: Kay at Suicyte | August 4, 2008

Search engines

Everybody seems to be blogging about new search engines these days. Most of them discuss the new CUIL search, which I found mostly disappointing. But so did everybody else. Over the last months, I have tried a couple of other search engines. What I typically do is a highly sophisticated benchmark involving a well-balanced testbed of three common search tasks:

  1. Search for the term “ubiquitin OR proteasome”. Check how many entries are found, and read all of them.
  2. Search for my own name. Vanity rules. High-scoring matches are typically 15-year-old Usenet postings of mine, asking silly questions that nobody cared to answer. Runners-up are caused by other people appropriating my name, including one European scientist accused of scientific misconduct. Yuck. That wasn’t me, I promise!
  3. Search for the term “ataxin-3” or “Rpn13” (according to taste). If the highest-scoring matches start like “comparison shop for ataxin-3 at cooldealz.com” or similar, the search engine gets bonus points.

Here are a few observations with notable search engines.

Google: Finds the largest number of entries, some of them sensible. Lists a certain number of shopping sites, typically offering antibodies to everything biological that I throw at it (except for my name). For a while, the top entry on Rpn13 was a blog post here at my site, but that time is long gone and my blog post has disappeared into the abyss (i.e. beyond position 10). What I hate about Google is the microsoftesque attempt to outsmart the user by also showing results with ‘spelling corrections’ applied to, say, my name. Unfortunately, there are lots of people with a name similar to mine, and I am tired of seeing their pages on top when all I want to see is me, me, me.

Cuil: Used to find a lot of entries, most of them with funny pictures with no obvious connection to the query. By now, the pictures seem to have gone, but so have a lot of the hits. As of today, the top hit on ‘ataxin-3’ points to a paper of mine (10 bonus points!), but strangely not to the paper itself but rather to a Nodalpoint page linking to the paper.

Gigablast: Not an NCBI product, despite the name. Ugly color scheme. There are two useful features: the categories on top are simple and intuitive and even seem to work (unlike some other search engines with built-in categorization). Even better: there is an easy way to filter for new entries (“freshness=2” restricts the search to entries from the last two days). This is really useful, and I haven’t figured out how to do it in Google (although I guess it is possible somehow).

Vadlo: I learned about this just today from GTO. Supposed to be dedicated to life-science content, e.g. protocols and the like. Fails miserably on my benchmarks. I don’t want to buy ataxin-2 antibodies when searching for ataxin-3. And no, I didn’t mean ‘Rp13’ when I typed ‘Rpn13’, thank you very much. The main feature of this site seems to be a daily science cartoon. I had to gasp when looking at today’s cartoon (see below, the original is here: Life_in_Research_Cartoon_1138.html). Are they really making fun of a guy who talks about ubiquitination, shows a structure slide that looks a lot like mine, and is supposedly unable to run a decent agarose gel? How could they possibly know? But wait, I am much fatter than the guy in the cartoon, I don’t usually wear a tie, and there are more than 3 people in the audience. This cannot be me – what a relief.

Cartoon on stupid ubiquitin guy


Posted by: Kay at Suicyte | June 12, 2008

Is it research? Or just data analysis?

Genome Technology blogs about the genome sequencing of Candidatus Korarchaeum cryptofilum, which appears to be an early-branching archaeon. Probably very interesting, although archaea are rarely the focus of my interest; I mostly work on sequences from a species that should properly be called Candidatus Homo sapiens.

What caught my eye, though, was the statement on author contributions (isn’t this the part of a paper that everybody reads first?). Anyway, this is what they say:

Author contributions: J.G.E., P.R., M.K., and K.O.S. designed research; J.G.E., M.P., B.P.H., A.L., E.G., K.B., and G.W. performed research; J.G.E., M.P., D.E.G., K.S.M., Y.W., L.R., C.B.-A.,V.K., I.A., E.V.K., P.H., N.K., and K.O.S. analyzed data; and J.G.E., D.E.G., E.V.K., and K.O.S. wrote the paper.

You can probably guess what I am hinting at. Seven people “performed research”, while 12 people “analyzed data”, with only a small overlap. It is no surprise that a genome sequencing paper needs a lot of people doing data analysis – this is what genome papers are all about. So far, I had assumed that the analysis of genome data should be considered ‘research’ – probably more so than cutting chromosomes into pieces or operating lab robots and sequencing machines. Apparently, some people see this differently.

In hindsight, I should have known better. At some point in the past, I used to work in a research institute (a very nice place, by the way) with a revealing organization of its phone directory. If memory serves me, it had individual pages entitled “Research group X” and “Research group Y”, listing the PI, postdocs, and students. At the end, there was a page with no particular heading, which held entries for 1) Janitor, 2) Workshop, 3) Graphics studio, 4) Bioinformatics. You see, we are just one of these facilities …

Posted by: Kay at Suicyte | June 7, 2008

Turning (part of) the proteasome on its head

I am a bit short on time, but I have seen that the Glickman paper on the proteasome base structure has finally appeared in print – this event should not go unnoticed. Before I begin to discuss the paper, I must admit that I haven’t really read it – but I have heard Michael talk about this model at least three times, and had lengthy discussions with him and others during the Lake Garda meeting. The new model, now published in Nature Structural and Molecular Biology, departs from the old dogma of how the subunits of the 19S proteasome regulatory complex are arranged. Not surprisingly, reactions from the proteasome field are mixed. It is no coincidence that it took more than a year to get this story published.

Before describing the new model, let me briefly recount the conventional wisdom on proteasome architecture. In the center, there is the 20S core particle, whose structure is very well understood. It consists of four stacked rings of seven different subunits each. The two middle rings contain the beta subunits – three of them with catalytic activity, the other ones are closely related but inactive. The two outer rings contain the alpha subunits, which are distantly related to the beta subunits; however, none of the alphas is catalytically active.

The outer faces of the core particle, formed by the alpha subunits, can associate with at least two different regulatory particles. The more familiar (and probably also more important) one is the ’19S regulatory particle’. The 19S particle itself consists of a ‘base’ structure proximal to the 20S core proteasome, and a ‘lid’ sitting on top of the base. By the way, it was also Michael Glickman (while working with Dan Finley), who showed that base and lid are separate particles held together by the Rpn10/S5a subunit. The lid will probably be a topic for another post; here, I will only talk about the ‘base’ and how it attaches to the 20S core.

The ‘base complex’ consists of at least 10 proteins: 6 homologous AAA-ATPases forming a hexameric ring like most other AAA ATPases, 2 large proteins called Rpn1 and Rpn2 in yeast, and finally Rpn10/S5a, which is the classical ubiquitin receptor of the proteasome and is also required for holding the lid in place. A very recent addition is Rpn13, which is probably also a component of the base.

Archaea have a proteasome very much like the eukaryotic one, and the same is true for a group of eubacteria, the actinomycetes (which include the notorious Mycobacterium tuberculosis). These prokaryotic proteasomes are somewhat simpler: they contain only the 20S core (made from homomeric rings of only one alpha and one beta subunit each) and a hexameric ring of ATPases very similar to the eukaryotic base. In prokaryotes, there is no doubt that the ATPase ring is stacked on top of the alpha ring of the 20S core, and the same architecture was assumed for the eukaryotic base. The big Rpn1 and Rpn2 proteins are usually depicted as blobs sitting somewhere on top of the ATPase ring, together with Rpn10 forming a ‘hinge structure’ for attaching the lid.

This architecture has now been challenged by the new paper from Michael Glickman’s group in Haifa. He describes a number of results from very different approaches, which all converge on an alternative model in which Rpn1 and Rpn2 are more proximal to the 20S core than the AAA ATPases are. Apparently, the whole thing got started by a bioinformatics paper by Andrey Kajava, coincidentally a former office-mate of mine from my Lausanne years. Andrey had predicted that the repeat motifs of Rpn1 and Rpn2 would give the proteins a toroid structure. When this first paper came out, it was greeted with some skepticism, in particular from the cryo-EM people, who did not see a convincing toroid electron density on top of the ATPase ring.

Rina Rosenzweig in Michael Glickman’s group took this idea one step further: they collaborated with Pawel Osmulski and Maria Gaczynska, who use atomic force microscopy (AFM) for studying macromolecular assemblies. According to the published AFM data, both Rpn1 and Rpn2 form individual rings. When analyzed together, a ring with the same diameter but twice the height was observed, suggesting that Rpn1 and Rpn2 stack on top of each other. Crosslinking experiments showed that Rpn2, but not Rpn1, appeared to be a direct neighbor of the 20S core particle: according to MS analysis, Rpn2 was crosslinked to five different alpha subunits.

AFM series of 20S proteasome and base

Finally, a series of atomic force micrographs of 20S core particles reconstituted with various factors was obtained. I think the figure to the right looks quite convincing. The rightmost panel shows the 20S core with a relatively flat surface. In the 2nd panel from the right, Rpn2 appears to form a circular structure on top of the 20S core. In the 3rd panel from the right, Rpn2 and Rpn1 combined result in a circular structure with a higher profile (bottom row), while the addition of the full base particle, including the ATPases, broadens but does not further heighten the profile.

alternative model for proteasome base

The obvious model for explaining these results looks like the one shown on the left. The two rings of Rpn2 (yellow) and Rpn1 (red) form a funnel-like structure on top of the ‘entrance’ of the 20S core particle (green). The six AAA ATPases (orange) form a ring encircling the funnel.

The two most important questions are i) is this model correct? and ii) will it have a strong impact on the future of proteasome research?

I mentioned above that reactions to the new model and the underlying data have been mixed. During the last year, I have discussed it with several people from the field and heard comments ranging from ‘certainly correct, in perfect agreement with my new data’ to ‘complete rubbish, my new data prove that the model is wrong’. I myself find the data convincing, but I am not much of an expert in atomic force microscopy. I am aware of the specificity problems with photochemical crosslinking, but as far as I can tell, this part of the paper doesn’t look too bad either. So, judging by the data, I don’t think the new model can be easily dismissed. There are two things I don’t like about the model, though. First, I find it hard to believe that the arrangement of core particle and ATPases should differ between eukaryotes and prokaryotes (which lack Rpn1/2). Second, I see a problem with Rpn2 being completely buried inside the proteasome structure. After all, Rpn13 has been found to attach to the complex by binding Rpn2, which is not really compatible with a purely internal position. Michael (sort of) addressed this problem by admitting that some part of Rpn2 might stick out somewhere. I don’t know what to think; time will probably tell.

With regard to the relevance of the new model, it is interesting to note the complete absence of both associated ‘news & views’ material and press releases claiming new avenues to treat cancer. I find this somewhat surprising, as this new finding is much closer to a ‘gatekeeping’ role than Rpn13 is. It should be kept in mind that the entrance pore of isolated 20S proteasome core particles is closed. Opening this pore requires an interaction with the base complex. According to the new model, Rpn2 would be in the best position to do this job. I am convinced that – as soon as more data supporting the new model become available – a lot of the accepted truths about ubiquitin recognition by the proteasome and the shuttling of substrates into the 20S catalytic chamber will have to be reconsidered.

Addition: After discussing this post with Michael Glickman, I should point out that even in his new model there are contacts between the ATPases and the 20S core. Thus, the new model is not that different from the archaeal proteasome. The major difference is that in the revised eukaryotic model, the opening in the ATPase ring is large enough to accommodate the toroidal Rpn2/Rpn1 subcomplex.

Posted by: Kay at Suicyte | June 1, 2008

Two degrees of co-authorship

I remember a piece of dialogue I have seen in at least two different movies (can’t remember which ones, though). It went roughly like this: First guy (mostly harmless wannabe-gangster): “Hi, my name is John, but my friends call me Sharky!” Second guy (much cooler than the first one): “My name is Jack, and I don’t have friends.”

This line reminds me to some degree of scientists doing social networking. I am not so much thinking of Facebook and the like, but rather of their scientific siblings like Nature Network and SciLink, as they seem to be gaining popularity in the scientific blogosphere (see here, here, here, here, and here). All of these services ask you for your affiliation, workplace and several other obvious and semi-obvious data. The scientifically inclined services also ask for your publication list. This can be a bit tedious, depending on how prolific you are. I imagine that Eugene Koonin would have to hire a summer student to enter his publication list.

When all of these questions have been answered, you are asked to build a network by connecting your entry to that of ‘friends’ who are also represented in the system. This is where – at least for me – the problems begin. As everybody knows (or at least has always assumed), real scientists® don’t have friends. Just like the cool guy in the movie.

What we do have are colleagues, competitors, co-workers (and maybe some more c-words). In terms of scientific social networking, they would pass for friends, I guess. The major problem here is that they are typically not present on the same services. And if they are, it can be very hard to find them. I tried it myself, typing in just about everybody I know to be active in my area of work. The result: nada. Things look different with science bloggers – they all seem to be present. Almost everybody, almost everywhere.

I am obviously not in the networking business (and happy about it, too), but I have invested some thought in how to automatically find (or guess) related souls present in a social network. The only possibility I can think of is the one large, untapped resource of science networks: the publication list. It should be possible to construct complete co-authorship networks, based on who authored a paper together with whom. Obviously, this network would contain a number of highly connected nodes, either due to very prolific and collaborative people, or due to people with names like ‘Smith, J’. Nevertheless, this would be an interesting resource that I have not seen implemented so far.

Even if you are not going for the full network, this approach might help the networking sites find other members who have been your co-authors (or, more indirectly, co-authors of your co-authors). This candidate list could be presented to you and might help you identify which other network members could be of interest to you. Maybe this approach has been tried before, but everything I have seen so far is based either on the geographical proximity of your workplace, or on matches between tags and keywords that the network users have assigned to themselves.
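As a minimal sketch of the idea (all names and papers below are invented for illustration, not real publication data): build an undirected co-authorship graph from per-paper author lists, then count intermediate co-authors between two people with a breadth-first search.

```python
# Toy co-authorship network: who is connected to whom, and through
# how many intermediate co-authors.
from collections import defaultdict, deque

# Hypothetical papers, each given as its author list.
papers = [
    ["Kay", "Aravind"],
    ["Aravind", "Koonin"],
    ["Koonin", "Eisen"],
    ["Bork", "Jensen"],
    ["Kay", "Bork"],
]

# Adjacency sets: two people are linked if they share at least one paper.
graph = defaultdict(set)
for authors in papers:
    for a in authors:
        for b in authors:
            if a != b:
                graph[a].add(b)

def intermediates(start, goal):
    """Number of intermediate co-authors on the shortest chain from
    start to goal, or None if the two are not connected at all."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == goal:
            return max(dist - 1, 0)  # edges on the path, minus one
        for nxt in graph[person]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

print(intermediates("Kay", "Eisen"))   # 2 (via Aravind and Koonin)
print(intermediates("Kay", "Jensen"))  # 1 (via Bork)
```

On real data, the hard part would of course be name disambiguation – the ‘Smith, J’ problem mentioned above.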

Just for fun, I have tried to (manually!) analyse my co-authorship relations to a select group of people: bloggers found in my blogroll. I haven’t tried all of them – this work can become quite tedious if more than one intermediate co-author is involved. However, for all the science bloggers I have tried so far, I was able to find a connection with two or fewer intermediate authors. Here are some examples:

Direct co-authorship

I found only one example, the blog called ‘Research Highlights from the Aravind group‘. In the past, I have occasionally co-published with L. Aravind and people in his group.

One intermediate author

Paweł Szczęsny (Freelancing Science) ↔ Andrei Lupas ↔ me
Roland Krause (nftb) ↔ Peer Bork ↔ me
Jonathan Eisen (Tree of life) ↔ Eugene Koonin ↔ me
Lars Juhl Jensen (Buried treasure) ↔ Peer Bork ↔ me

Two intermediate authors

Ian York (Mystery rays) ↔ Alfred Goldberg ↔ Daniel Finley ↔ me
Pedro Beltrao (Public rambling) ↔ Luis Serrano ↔ Peer Bork ↔ me
Neil Saunders (what you’re doing..) ↔ Bostjan Kobe ↔ Andrei Kajava ↔ me
Jason Stajich (fungal genomes) ↔ Ewan Birney ↔ Philipp Bucher ↔ me
Keith Robison (omics omics) ↔ Emad Alnemri ↔ Vishva Dixit ↔ me

Sometimes it was easier than expected to find a link to somebody working in a very different area. In other cases, I found a very strong first-degree link to somebody who had worked for years on the same subjects as we had – but always as competitors, with no common publication.
