Tag Archives: data

Me, Myself and Data – Keren Limor-Waisberg

For Love Data Week (12th-16th February 2018) we are featuring data-related people. Today we talk to Keren Limor-Waisberg, Founder and CEO of the Scientific Literacy Tool, who advocates for open access, citizen science and scientific literacy.

Telling Stories with Data

Let’s start with an easy one. What kind of data do you work with and what do you do with it?

I help people from all walks of life access, understand, and/or use scientific data and literature concerning their scientific topics of interest.

Tell us how you think you can use data to make a difference in your field.

A scientifically literate society is a society in which people are empowered with knowledge they can use to achieve their different goals. As we look at data and understand it, we acquire skills that are essential for both our personal and our societal development.

How do you talk about your data to someone outside of academia?

When I talk about data with someone outside academia, I will take the time to define any new terminology and make sure we understand each other.

Connected Conversations

What data-related challenges do you have to deal with in your research environment?

The main data-related challenge non-academics have is the access to data. Many datasets are simply not accessible to the public. Sadly, some datasets are not even available for other academics.

Once access is achieved, people will struggle with different formats, lack of metadata, different units and/or the lack of tools to process and analyse the data.

How do you think these challenges might be overcome?

The challenge of open data – making data accessible – is currently being addressed by many countries. The European Commission's Directorate-General for Communications Networks, Content and Technology (CNECT), for example, is formulating directives that aim to open up publicly funded research data in Europe and help people reuse it.

Different organisations are now developing tools and packages that will help people work with datasets.

If you were in charge what data-related rule would you introduce?

As a citizen of the world, I advocate for open access, citizen science and scientific literacy so as to promote the understanding that knowledge empowers both individuals and societies to develop and prosper. To make this progress, I think we need to agree on common ethical guidelines – from the right to access publicly funded data to the right to use it.

We are Data

Tell us about your happiest data moment.

My happiest data moment was during my PhD. I calculated the performance of some viral elements using different tests. I had a lot of data and it took a while for the scripts to run. It was nerve-racking. I can still remember sitting there listening to the screeching sounds of the computer. And then one by one I got the results, and they all confirmed my hypothesis. It was great. It was a small piece of scientific knowledge, but I was the first person in the world to know about it.

What advice do you have for someone who is just embarking on a career in your field?

For someone embarking on a career in the field of promoting scientific literacy, I would recommend being very patient. It is a slow process and there are many obstacles, but in the end it is a very rewarding profession.

What do you think the future of research data looks like?

I think that in the near future much more publicly funded research data will be accessible. We will see more and more tools emerging to handle this data. More and more people will use these data and tools to support their statements, to dispute ideas, to create products and services, to entertain, or perhaps just to enjoy finding something new.

There is A LOT of data out there about all sorts of things and it is being collected all the time. Does anything frighten you about data?

I think it is important to make sure privacy and identities are protected when data is collected and shared.

Published 15 February 2018
Written by Keren Limor-Waisberg
@TheLiteracyTool, @OpenResCam
Creative Commons License

Me, Myself and Data – Melissa Scarpate

For Love Data Week (12th-16th February 2018) we are featuring data-related people. Today we talk to Melissa Scarpate, Research Associate in the Faculty of Education, PEDAL Centre.

Telling Stories with Data

Let’s start with an easy one. What kind of data do you work with and what do you do with it?

I work with large longitudinal data sets and large cross-cultural data. I really enjoy running latent growth models with the longitudinal data to assess changes over time in my variables of interest (primarily child/adolescent self-regulation and parenting). I use the cross-cultural data to test for differences or similarities in parenting and adolescent developmental outcomes.
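To make this concrete, here is a minimal sketch – not Melissa's actual code, and with invented file and column names – of fitting a simple linear growth model to long-format longitudinal data in Python. A mixed model with a random intercept and slope per child is the simplest relative of a latent growth curve model; a dedicated SEM package would be used for the full latent-variable version.

```python
# Illustrative sketch only: a random-intercept, random-slope model of
# change over time, the simplest relative of a latent growth curve model.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Long format: one row per child per measurement wave.
df = pd.read_csv("self_regulation_long.csv")  # columns: child_id, wave, self_reg

model = smf.mixedlm(
    "self_reg ~ wave",       # fixed effect: average trajectory across waves
    data=df,
    groups=df["child_id"],   # repeated measures nested within children
    re_formula="~wave",      # child-specific intercepts and slopes
)
result = model.fit()
print(result.summary())      # mean growth plus variance around it
```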

Tell us how you think you can use data to make a difference in your field.

By using large data sets that either have many time points or have many different countries and cultures represented, I am able to assess relationships between study variables in an impactful way. For instance, if I find that parental monitoring predicts lower levels of adolescent anxiety in 13,000 adolescents across 10 countries, then I feel this information has a larger impact on families in a more global way than using a local data set with a small sample size.

How do you talk about your data to someone outside of academia?

Very carefully! I eliminate jargon and speak about the data in broad, general terms rather than getting into details. I would rather the person come away from our conversation with an understanding of what I do, what I have found in my research, and how this impacts society than to impress them with fancy words and statistics.

Connected Conversations

What data-related challenges do you have to deal with in your research environment?

I work in an office with others and it is important for each of us to keep the data confidential and out of sight of one another.

How do you think these challenges might be overcome?

Screen protectors, earplugs in the case of video coding, not printing any data, etc.

If you were in charge what data-related rule would you introduce?

If I were in charge of all data then I would create a rule that all data could be shared in an easy and collaborative way whilst maintaining study participants’ anonymity.

We are Data

Your happiest data moment?

When I finally got my latent growth model to run!

What advice do you have for someone who is just embarking on a career in your field?

Take as many classes in data management, methods, and statistics as you can, and while in graduate school get experience in these areas with researchers who have excellent skills and training.

What do you think the future of research data looks like?

Open, transparent, simplified with data visualisation techniques, and impactful.

There is A LOT of data out there about all sorts of things and it is being collected all the time. Does anything frighten you about data?

Technological advances such as my phone being able to predict where I am driving before I leave, and my Echo Dot/Alexa picking up all of my conversations, make me nervous. So far, though, the benefits outweigh the potential negatives – at least in my experience.

Published 14 February 2018
Written by Melissa Scarpate
Creative Commons License

Me, Myself and Data – Kirsten Lamb

For Love Data Week (12th-16th February 2018) we are featuring data-related people. Today we talk to Kirsten Lamb, Deputy Librarian, Department of Engineering.

Telling Stories with Data

Let’s start with an easy one. What kind of data do you work with and what do you do with it?

I work with bibliometric data for the most part. I'm not very systematic and tend to be rather intuitive about how I gather and work with it in order to tease out insights. Because I don't usually have a specific question to put to the data, I tend to just explore to find out what is interesting about a body of literature.

Tell us how you think you can use data to make a difference in your field.

The idea that librarians can help define a research landscape and identify gaps is relatively new. I like to think that by learning to work with bibliometric data I can help researchers better engage with information professionals and give librarians the confidence to use their skills in the research context.

How do you talk about your data to someone outside of academia?

I tell them that by looking at patterns in publishing, researchers can see where trends and gaps are, as well as exploit those patterns to have a larger impact. But I also make sure to point out that basing insights on metrics alone is flawed. You have to understand what each metric is and isn't measuring. No metric is an indication of the quality of an individual piece of research, and there's no replacement for critical analysis to determine that.

Connected Conversations

What data-related challenges do you have to deal with in your research environment?

First, I don’t have a background in statistics or programming, so there’s a limit to how complex an analysis I can do. Second, the metrics themselves are limited, so communicating the value of the information embedded in the data is a challenge. Third, a lot of bibliometric software is based on use of a particular database’s API so it’s difficult to combine results from different databases to give a broader picture.
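To illustrate the single-database problem, here is a minimal sketch – our illustration, not the software Kirsten describes – of pulling records from one freely accessible source, Crossref's public REST API, and flattening them to a source-neutral shape. Combining databases then becomes a matter of mapping each API's fields onto the same shape.

```python
# Illustrative sketch: query Crossref's public REST API and reduce each
# record to a minimal, source-neutral shape that results from another
# bibliometric database could be mapped onto.
import requests

def crossref_records(query: str, rows: int = 5) -> list[dict]:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query": query, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {
            "doi": item.get("DOI"),
            "title": (item.get("title") or [""])[0],
            "year": (item.get("issued", {}).get("date-parts") or [[None]])[0][0],
            "citations": item.get("is-referenced-by-count", 0),
            "source": "crossref",
        }
        for item in items
    ]

for record in crossref_records("bibliometrics"):
    print(record)
```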

How do you think these challenges might be overcome?

All of them would be helped if I could learn to program! Collaborating with someone who knew how to do the actual analysis would be great, because that way I could provide the insights and work out exactly what I wanted to measure, and they could make it happen.

If you were in charge what data-related rule would you introduce?

I'd require people who write data-analysis software to make it more user-friendly for people who don't know how to code. Basically, there'd be a WYSIWYG, Microsoft-Excel-style program for doing bibliometric analyses and generating beautiful graphics, without any coding required.

We are Data

Tell us about your happiest data moment.

I was pleased when I discovered that Web of Science does a lot of the analysis I wanted to do but thought I could only do if I had InCites or similar. As much as I like knowing what’s being measured and having an intimate knowledge of the data, sometimes it’s nice to just be able to click a few buttons and get a nice graph!

What advice do you have for someone who is just embarking on a career in your field?

I’d want to tell them that they don’t need to be a maths or programming whiz to do it, but I’m not yet convinced of that myself! I think the main thing is not to think of some metrics as good and some as bad. They’re all just tools and you need to know what they do in order to pick which ones you want to use. Always look under the bonnet!

What do you think the future of research data looks like?

While I'd love for it to be open, interoperable, integrated and well-indexed, I'm not sure that's going to happen any time soon. Each time someone develops a new standard to rule them all, it just gets added to the growing list of standards.

There is A LOT of data out there about all sorts of things and it is being collected all the time. Does anything frighten you about data?

Yes. I don’t think that as a species we’ve really figured out what it means to live in a data-rich ecosystem, and I mean that both metaphorically and literally. The rate at which data is growing is currently unsustainable from the perspectives of preservation, legislation, interpretation and energy use. While I’m definitely uncomfortable with how much certain companies know about me, I’m more concerned with the fact that collecting and managing that amount of data about everyone and everything is bad for the planet and we haven’t figured out how to make sure it’s safe. We need legislation and curation to catch up with technology instead of lagging about a decade behind.

Published 13 February 2018
Written by Kirsten Lamb
@library_sphinx
Creative Commons License

Me, Myself and Data – Dr Sudhakaran Prabakaran

For Love Data Week (12th-16th February 2018) we are featuring data-related people. Today we talk to Dr Sudhakaran Prabakaran, Lecturer/Group Leader, Department of Genetics.

Telling Stories with Data

Let’s start with an easy one. What kind of data do you work with and what do you do with it?

We use population-level sequencing datasets from TCGA, mutation datasets from COSMIC, ClinVar and HGMD, and curated databases from other labs. We use discarded datasets, negative datasets, already-published datasets – anything and everything. We develop and use structural genomics, mathematical modelling and machine-learning tools to analyse mutations that map to noncoding regions of the human genome.
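As a purely illustrative sketch of the kind of first-pass filtering such work involves – not the group's actual pipeline, and with invented files and columns – one might flag variants that fall outside annotated coding regions before any deeper modelling:

```python
# Hypothetical example: keep only variants that overlap no annotated
# coding interval on their chromosome.
import pandas as pd

variants = pd.read_csv("mutations.tsv", sep="\t")        # columns: chrom, pos
coding = pd.read_csv("coding_regions.bed", sep="\t",
                     names=["chrom", "start", "end"])    # BED-style intervals

def is_noncoding(chrom: str, pos: int) -> bool:
    """True if the position overlaps no coding interval on its chromosome."""
    hits = coding[(coding.chrom == chrom) &
                  (coding.start <= pos) & (pos < coding.end)]
    return hits.empty

variants["noncoding"] = [
    is_noncoding(c, p) for c, p in zip(variants.chrom, variants.pos)
]
print(variants[variants.noncoding].head())
```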

Tell us how you think you can use data to make a difference in your field.

We live on these datasets. Biological data is going to exceed 2.5 exabytes in the next two years, and the bottleneck is the analysis of these datasets. Our job is to find patterns in them. Rare variants and driver mutations become significant and identifiable only when we look for them in a population context.

How do you talk about your data to someone outside of academia?

For us it is not difficult. The datasets we use are generated and curated by governmental and international consortia, which have done the bulk of the publicity. For example, the TCGA dataset contains all kinds of data from thousands of cancer patients and is curated by the NIH. The power of this data is for all to see. I just say we try to aid cancer diagnosis by crawling through these datasets to find patterns.

Connected Conversations

What data-related challenges do you have to deal with in your research environment?

We are happy with the publicly available datasets. Our problem starts with the datasets we collect ourselves. How to store them, analyse them, and make them available for everyone to use are the questions we are trying to answer all the time.

How do you think these challenges might be overcome?

I am an ardent proponent of cloud storage and computation. I believe that is the future. I am also aware that some countries are concerned about data migrating outside their geographical boundaries.

If you were in charge what data-related rule would you introduce?

I am not going to make up anything new. Past US Presidents have already introduced rules to the effect that any data generated with public funds should be made available.

Governmental organisations should demystify cloud-based storage and computation. People are unduly worried: they willingly give away more personal data on Facebook, Twitter and Instagram than through the genome sequences collected by public consortia.

We are Data

Tell us about your happiest data moment.

It is not one moment; it is a series of moments, up until now. I can run a viable research programme with no startup funds, just by scavenging through publicly available datasets.

What advice do you have for someone who is just embarking on a career in your field?

Learn machine learning and cloud computing.

What do you think the future of research data looks like?

More data analysis than data generation.

There is A LOT of data out there about all sorts of things and it is being collected all the time. Does anything frighten you about data?

I am in fact excited. I believe we need to train more data scientists. We are in good times. Data is becoming truly democratic!

Published 12 February 2018
Written by Dr Sudhakaran Prabakaran
@wk181
Creative Commons License

Sustaining long-term access to open research resources – a university library perspective

Originally published 11 September 2017, Written by Dave Gerrard

In the third in a series of three blog posts, Dave Gerrard, a Technical Specialist Fellow from the Polonsky-Foundation-funded Digital Preservation at Oxford and Cambridge project, describes how he thinks university libraries might contribute to ensuring access to Open Research for the longer term. The series began with Open Resources, who should pay, and continued with Sustaining open research resources – a funder perspective.

Blog post in a nutshell

This blog post works from the position that the user-bases for Open Research repositories in specific scientific domains are often very different to those of institutional repositories managed by university libraries.

It discusses how in the digital era we could deal with the differences between those user-bases more effectively. The upshot might be an approach to the management of Open Research that requires both types of repository to work alongside each other, with differing responsibilities, at least while the Open Research in question is still active.

And, while this proposed method of working together wouldn't entirely clarify 'who is going to pay', it at least clarifies who might be responsible for finding funding for each aspect of the task of maintaining access in the long term.

Designating a repository’s user community for the long-term

Let’s start with some definitions. One of the core models in Digital Preservation, the International Standard Open Archival Information System Reference Model (or OAIS) defines ‘the long term’ as:

“A period of time long enough for there to be concern about the impacts of changing technologies, including support for new media and data formats, and of a changing Designated Community, on the information being held in an OAIS. This period extends into the indefinite future.”

This leads us to two further important concepts defined by the OAIS:

“Designated Communities” are “an identified group of potential Consumers who should be able to understand a particular set of information”, i.e. the set of information collected by the ‘archival information system’.

A “Representation Information Network” is the tool that allows the communities to explore the metadata which describes the core information collected. This metadata will consist of:

  • descriptions of the data contained in the repository,
  • metadata about the software used to work with that data, and
  • the formats in which the data are stored and related to each other, and so forth (one possible shape for such a record is sketched below).
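Purely as a toy illustration – the field names below are our own invention, not terms defined by the OAIS standard – a minimal Representation Information record might be modelled like this:

```python
# Toy model of a Representation Information record. Field names are
# illustrative, not OAIS terminology.
from dataclasses import dataclass, field

@dataclass
class RepresentationInfo:
    dataset_id: str                 # which deposited data this describes
    description: str                # what the data contains
    software: list[str] = field(default_factory=list)  # tools needed to use it
    formats: list[str] = field(default_factory=list)   # file formats involved
    related: list[str] = field(default_factory=list)   # links to other records

record = RepresentationInfo(
    dataset_id="vfb-0001",
    description="Confocal image stacks of Drosophila brain neurons",
    software=["Fiji"],
    formats=["OME-TIFF"],
    related=["fly-anatomy-ontology"],
)
print(record)
```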

In the example of the Virtual Fly Brain Platform repository discussed in the first post in this series, the Designated Community appears to be: “… neurobiologists [who want] to explore the detailed neuroanatomy, neuron connectivity and gene expression of Drosophila melanogaster.” And one of the key pieces of Representation Information, namely “how everything in the repository relates to everything else”, is based upon a complex ontology of fly anatomy.

It is easy to conclude, therefore, that you really do need to be a neurobiologist to use the repository: it is fundamentally, deeply and unashamedly confusing to anyone else that might try to use it.

Tending towards a general audience

The concept of Designated Communities is one that, in my opinion, the OAIS Reference Model never adequately gets to grips with. For instance, the OAIS Model suggests including explanatory information in specialist repositories to make the content understandable to the general community.

Long-term access within this definition thus implies designing repositories for Designated Communities consisting of what my co-Polonsky Fellow Lee Pretlove describes as "all of humanity, plus robots". The deluge of additional information that would need to be added to support this totally general resource would render it unusable; to aim at everybody is effectively to aim at nobody. And, crucially, "nobody" is precisely who is most likely to fund a "specialist repository for everyone", too.

History provides a solution

One way out of this impasse is to think about currently existing repositories of scientific information from more than 100 years ago. We maintain a fine example at Cambridge: the Darwin Correspondence Project, though it can't be compared directly to Virtual Fly Brain. The former doesn't contain specialist scientific information like that held by the latter – it holds letters, notebooks, diary entries and so on: 'personal papers', in other words. These are the types of materials that university archives tend to collect.

Repositories like Darwin Correspondence don’t have “all of humanity, plus robots” Designated Communities, either. They’re aimed at historians of science, and those researching the time period when the science was conducted. Such communities tend more towards the general than ‘neurobiologists’, but are still specialised enough to enable production and management of workable, usable, logical archives.

We don’t have to wait for the professor to die any more

So we have two quite different types of repository. There's the 'ultra-specialised' Open Research repository for the Designated Community of researchers in the related domain, and then there's the more general institutional 'special collection' repository containing materials that provide context to the science, such as correspondence between scientists, notebooks (which are becoming fully electronic), and rough 'back of the envelope' ideas. Sitting somewhere between the two are publications – the specialist repository might host early drafts and work in progress, while the institutional repository contains finished, published work. And the institutional repository might also collect enough data to support these publications, too, like our own Apollo Repository does.

The way digital disrupts this relationship is quite simple: a scientist needs access to her 'personal papers' while she's still working. So, in the old days (i.e. more than 25 years ago), the archive couldn't take these while she was still active, and would often have to wait for the professor to retire, or even die, before such items could be donated. Now that everything is digital, however, the prof can both keep her "papers" locally and deposit them at the same time. The library special collection doesn't need to wait for the professor to die to get its hands on the context of her work – or, indeed, wait for her to become a professor.

Key issues this disruption raises

If we accept that specialist Open Research repositories are where researchers carry out their work, and that the institutional repository's role is to collect contextual material to help us understand that work further down the line, then what questions does this raise about how those managing these repositories might work together?

How will the relationship between archivists and researchers change?

The move to digital methods of working will change the relationships between scientists and archivists. Institutional repository staff will become increasingly obliged to forge relationships with scientists earlier in their careers. Of course, the archivists will need to work out which current research activity is likely to resonate most in future. Collection policies might have to keep more closely in step with funding trends, for instance. Perhaps the university archivist of the digital future might spend a little more time hanging round the research office?

How will scientists’ behaviour have to change?

A further outcome of being able to donate digitally is that scientists become more responsible for managing their personal digital materials well, so that it’s easier to donate them as they go along. This has been well highlighted by another of the Polonsky Fellows, Sarah Mason at the Bodleian Libraries, who has delivered personal digital archiving training to staff at Oxford, in part based on advice from the Digital Preservation Coalition. The good news here is that such behaviour actually helps people keep their ongoing work neat and tidy, too.

How can we tell when the switch between Designated Communities occurs?

Is it the case that there is a ‘switch-over’ between the two types of Designated Community described above? Does the ‘research lifecycle’ actually include a phase where the active science in a particular domain starts to die down, but the historical interest in that domain starts to increase? I expect that this might be the case, even though it’s not in any of the lifecycle models I’ve seen, which mostly seem to model research as either continuing on a level perpetually, or stopping instantly. But such a phase is likely to vary greatly even between quite closely-related scientific domains. Variables such as the methods and technologies used to conduct the science, what impact the particular scientific domain has upon the public, to what degree theories within the domain conflict, indeed a plethora of factors, are likely to influence the answer.

How might two archives working side-by-side help manage digital obsolescence?

Not having access to the kit needed to work with scientific data in future is one of the biggest threats to genuine 'long-term' access to Open Research, but one that I think really does fall to the university to mitigate. Active scientists using a dedicated, domain-specific repository will by default be able to deal with the material in that repository: if one team deposits material that others don't have the technology to use, then they will sort that out amongst themselves at the time, as a matter of course, and they shouldn't have to concern themselves with what people will do 100 years later.

However, university repositories do have more of a responsibility to history, and a daunting responsibility it is. There is some good news here, though… For a start, universities have a good deal of purchasing power they can bring to bear upon equipment vendors, in order to insist, for example, that they produce hardware and software that creates data in formats that can be preserved easily, and to grant software licenses in perpetuity for preservation purposes.

What’s more fundamental, though, is that the very contextual materials I’ve argued that university special collections should be collecting from scientists ‘as they go along’ are the precise materials science historians of the future will use to work out how to use such “ancient” technology.

Who pays?

The final, but perhaps most pressing, question is: who pays for all this? Well, I believe that managing long-term access to Open Research in two active repositories working together, with two distinct Designated Communities, at least makes things a little clearer. Funding specialist Open Research repositories should be the responsibility of funders in that domain, but they shouldn't have to worry about long-term access to those resources. As long as the science is active enough to be funded, a proportion of that funding should go to the repositories the science needs to support it. The exact proportion should depend upon the value the repository brings; it might be calculated using factors such as how much the repository is used, how much time using it saves, what researchers' time is worth, how many Research Excellence Framework brownie points (or similar) come about as a result of collaborations enabled by that repository, and so on.
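As back-of-the-envelope arithmetic only – the weights and numbers below are invented for illustration, not a proposal from this post – those factors might combine along these lines:

```python
# Invented illustration: fold the factors above into a single share of a
# grant, capped so the repository never swallows the whole award.
def repository_share(hours_saved: float, hourly_rate: float,
                     ref_points: float, point_value: float,
                     grant_total: float, cap: float = 0.10) -> float:
    """Fraction of a grant routed to the repository, capped at `cap`."""
    value_delivered = hours_saved * hourly_rate + ref_points * point_value
    return min(value_delivered / grant_total, cap)

# e.g. 500 researcher-hours saved at £50/hour, plus two REF-style
# "brownie points" notionally worth £5,000 each, on a £1m grant:
print(repository_share(500, 50.0, 2, 5_000.0, 1_000_000.0))  # -> 0.035
```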

On the other hand, I believe that university / institutional repositories need to find quite separate funding for their archivists to start building relationships with those same scientists, and working with them to both collect the context surrounding their science as they go along, and prepare for the time when the specialist repository needs to be mothballed.

With such contextual materials in place, there don't seem to be too many insurmountable technical reasons why, when it's acknowledged that the "switch from one Designated Community to another" has reached the requisite tipping point, the university / institutional repository couldn't archive the whole of the specialist research repository, describe it sensibly using the contextual material they have collected from the relevant scientists as they've gone along, and then store it cheaply on a low-energy medium (i.e. tape, currently). It would then be "available" to those science historians that really wanted to have a go at understanding it in future, based on what they could piece together about it from all the contextual information held by the university in a more immediately accessible state.

Hence the earlier the institutional repository can start forging relationships with researchers, the better. But it’s something for the institutional archive to worry about, and get the funding for, not the researcher.

Originally published 11 September 2017
Written by Dave Gerrard

Creative Commons License

Open at scale: sharing images in the Open Research Pilot

Originally published 8 May 2017, Written by Rosie Higman and Dr Ben Steventon

Dr Ben Steventon is one of the participants in the Open Research Pilot. He is working with the Office of Scholarly Communication to make his research process more open and here reports on some of the major challenges he perceives at the beginning of the project.

The Steventon Group, established last year, looks at embryonic development, focusing in particular on the zebrafish. To investigate problems in this area the group uses time-lapse imaging and tracks cells in 3D visualisations, which presents many challenges for data sharing – challenges they hope to address through the Wellcome Trust Open Research Pilot. Whilst the difficulties this group faces are specific to a particular type of research, they highlight some common challenges across open research: sharing large files, dealing with proprietary software, and joining up the different outputs of a group.

Sharing imaging data

The data created by time-lapse imaging and cell tracking is frequently on a scale that presents a technical, as well as financial, challenge. The raw data consists of several terabytes of film, which is then compressed into 500GB files for analysis. These compressed files are of a high enough quality to be used for analysis, but they are still not small enough to be shared easily. In addition, the group generates spreadsheets of tracking data, which can be shared easily but are meaningless without the original imaging files and the specific software that connects the two. One solution we are considering is the Image Data Resource, which is working to make imaging datasets in the life sciences – previously unshareable because of their size – available to the scientific community for re-use.

Making it usable

The software used in this type of research is a major barrier to making the group's work reproducible. The Imaris software the group uses costs thousands of pounds, so anything shared in its proprietary formats is only accessible to an extremely small group of researchers at wealthier institutions – in direct opposition to the principles of Open Research. It is possible to use Fiji, an open-source alternative, to recreate tracking from the imaging files and tracking spreadsheets; however, the data annotation originally performed in Imaris is lost when the images are not saved in the proprietary formats.

An additional problem in such analyses is sharing the protocols that detail the methodologies applied, from the preparation of the samples all the way through data generation and analysis. This is a common problem with standard peer-review journals, which often limit the space available for describing methods. The group are exploring new ways to communicate their research protocols and have created an article for the Journal of Visualized Experiments, but these are time-consuming to create and so are not always possible. Open peer-review platforms potentially offer a way to share detailed protocols more rapidly, as do specialist platforms such as Wellcome Open Research and Protocols.io.

Increasing efficiency by increasing openness

Whilst the file sizes and proprietary software in this type of research present some barriers to sharing, sharing also offers opportunities to improve practice across the community. Currently there are several different software packages being used for visualisation and tracking, so sharing more imaging data would allow groups to try out different types of images on different tools and make better purchasing decisions with their grant money. Furthermore, there is a great frustration in this area that lots of people are working on different algorithms for different datasets; greater sharing of these algorithms could reduce the time wasted writing a new algorithm when it might be possible to adapt a pre-existing one.

Shifting models of scholarly communication

As we move towards a model of greater openness, research groups face a new difficulty in working out how best to present their myriad outputs. The Steventon group intends to publish data (in some form), protocols and a preprint at the same time as submitting their papers to a traditional journal. This will make their work more reproducible, and it also allows researchers who are interested in different aspects of their work to access the bits that interest them. These outputs will link to one another through citations, but this relies on close reading of the different outputs and checking references. The Steventon group would like to make the links between the different aspects of their work more obvious and browsable, so the context is clear to anyone interested in the lab's work. As the research of the group is so visual, it would be appropriate to represent the different aspects of their work in a more appealing form than a list of links.

The Steventon lab is attempting to link and contextualise their work through their website, and it is possible to cross-reference resources in many repositories (including Cambridge's Apollo), but they would like there to be a more sustainable solution. They work in areas with crossovers to other disciplines – some people may be interested in their methodologies, others in the particular species they work on, and others still in the particular developmental processes they are researching. There are opportunities here for openness to increase the discoverability of interdisciplinary research, and we will be exploring this, as well as the issues around sharing images and proprietary software, as part of the Open Research Pilot.

Originally published 8 May 2017
Written by Rosie Higman and Dr Ben Steventon

Creative Commons License