Access to expertise on how to improve experimental workflows. This expertise spans the influence of experimental design and data acquisition on data analysis and data quality, both in omics and in nanosafety broadly, and the importance of experimental design in ensuring that the data generated are compatible with the various processing tools, including the predictive nanotoxicology and risk assessment tools offered across the NanoCommons TA partners.
Our Services
For Users planning to establish nano-ecotox laboratories or large-scale data-generation experiments, we offer support in the design of optimised workflows and the underpinning sample- and data-management needs, to maximise the suitability of the dataset for inclusion in the community database and to ensure that the data are suitable for use in subsequent modelling tools. These services can be applied to existing experimental set-up(s) or can support experimental facility design, with subsequent access to data processing services.
Site visits, either to established facilities (as a reference in terms of layout, equipment needs, workflow, etc.) or to new facilities (to identify potential pitfalls). Web-based information-exchange forums. Organisation of round-robin tests to evaluate inter-laboratory precision. Support in defining testing workflows, particularly when parts of the process need to be externalised, including consideration of which steps must be sequential versus which can run in parallel, shipping, sample stability, etc.
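As an illustration of the round-robin analysis, the minimal sketch below decomposes replicate measurements of a shared test sample into repeatability (within-lab) and reproducibility (within- plus between-lab) standard deviations, in the spirit of a one-way ANOVA per ISO 5725. The laboratory names and values are purely illustrative.

```python
# Minimal sketch of a round-robin precision analysis (balanced one-way
# ANOVA). Labs and measurement values below are illustrative placeholders.
import numpy as np

# Replicate measurements of the same test sample, per laboratory.
labs = {
    "lab_A": [10.1, 10.3, 9.9],
    "lab_B": [10.8, 11.0, 10.7],
    "lab_C": [9.7, 9.9, 9.8],
}

groups = [np.asarray(v, dtype=float) for v in labs.values()]
p = len(groups)                      # number of laboratories
n = groups[0].size                   # replicates per lab (balanced design)
grand_mean = np.mean(np.concatenate(groups))

# Within-lab (repeatability) and between-lab mean squares.
ms_within = np.mean([g.var(ddof=1) for g in groups])
ms_between = n * sum((g.mean() - grand_mean) ** 2 for g in groups) / (p - 1)

s_r = np.sqrt(ms_within)                           # repeatability SD
s_L2 = max((ms_between - ms_within) / n, 0.0)      # between-lab variance
s_R = np.sqrt(s_r ** 2 + s_L2)                     # reproducibility SD

print(f"repeatability s_r = {s_r:.3f}, reproducibility s_R = {s_R:.3f}")
```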
Integration of knowledge and interlinking of datasets via the use of agreed ontologies. Annotation of datasets with existing ontologies. Development of additional ontological terms to describe research data. Approaches such as the DC Summit Knowledge Integrating Application facilitate this.
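The sketch below shows one simple way such annotation can be recorded: dataset columns are mapped to ontology term IRIs and the mapping is stored as a machine-readable sidecar file. The column names and IRIs here are hypothetical placeholders, not actual terms from a nanosafety ontology.

```python
# Sketch: annotate dataset columns with ontology term IRIs and save the
# mapping alongside the data. IRIs below are hypothetical placeholders;
# in practice terms would come from an agreed nanosafety ontology.
import json
import pandas as pd

data = pd.DataFrame({
    "core_size_nm": [12.5, 30.1],
    "zeta_potential_mV": [-22.0, -35.4],
})

# Hypothetical column -> ontology term mapping.
annotations = {
    "core_size_nm": "http://example.org/onto/NANO_0000001",
    "zeta_potential_mV": "http://example.org/onto/NANO_0000002",
}

# Store the annotations as a machine-readable sidecar next to the data.
with open("dataset_annotations.json", "w") as fh:
    json.dump(annotations, fh, indent=2)

data.to_csv("dataset.csv", index=False)
```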
Large-dataset support through standardised processing and analysis tools, focusing on importing the data and organising it in a manner that makes it accessible and re-usable, for example through the DC Data Explorer Application. Integration and processing of existing datasets. Study-design support to minimise the post-study effort needed to integrate datasets into the NanoCommons knowledge base.
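A recurring step in making data re-usable is reshaping wide results tables into a long ("tidy") form that downstream tools can index and merge. The sketch below does this with pandas; the column names and endpoint encoding are assumptions for illustration.

```python
# Sketch: reshape a wide results table into a long ("tidy") form that is
# easier to index, merge and re-use. Column names are illustrative.
import pandas as pd

wide = pd.DataFrame({
    "material": ["TiO2_NM101", "ZnO_NM110"],
    "viability_24h": [0.91, 0.62],
    "viability_48h": [0.85, 0.41],
})

tidy = wide.melt(id_vars="material",
                 var_name="endpoint",
                 value_name="value")
# Split "viability_24h" into an endpoint name and a timepoint.
tidy[["endpoint", "timepoint"]] = tidy["endpoint"].str.rsplit("_", n=1, expand=True)
print(tidy)
```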
For Users that have large datasets, or are planning large-scale data-generation experiments, CEH will offer access to its large-dataset services for open, findable and re-usable data and for integration into the NanoCommons knowledge hub. Applicable to existing datasets, or as experimental-design support with subsequent access to data processing services.
Large-dataset support through standardised processing and analysis tools, especially for big data from omics or high-content screening experiments (e.g., ToxFlow, RRegrs). Applicable to existing datasets and/or as experimental-design support with subsequent access to data processing services.
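A standard first step when exploring such high-dimensional data is to standardise the features and project the samples onto principal components. The sketch below does this with scikit-learn on a synthetic matrix standing in for an omics or high-content screening dataset.

```python
# Sketch: project a high-dimensional omics matrix (samples x features)
# onto its first principal components for exploratory analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 500))        # placeholder for an expression matrix

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)     # 2-D coordinates per sample

print("explained variance ratio:", pca.explained_variance_ratio_)
```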
Large-dataset support in curation and quality-assurance approaches, and access to bioinformatics tools. Applicable to existing datasets, or as experimental-design support with subsequent access to data processing services and assessment of data completeness and re-usability.
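Completeness assessment can be as simple as checking a dataset against the fields the curation workflow expects. A minimal sketch follows; the required-field list and file path are hypothetical.

```python
# Sketch: a simple data-completeness report against the fields a curation
# workflow expects. The required fields and path are hypothetical.
import pandas as pd

REQUIRED = ["material", "core_size_nm", "assay", "endpoint", "value"]

df = pd.read_csv("dataset.csv")  # placeholder path

missing_cols = [c for c in REQUIRED if c not in df.columns]
if missing_cols:
    print("missing columns:", missing_cols)

present = [c for c in REQUIRED if c in df.columns]
completeness = df[present].notna().mean()   # fraction of non-missing values
print(completeness.round(2))
```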
Large-dataset support in maximising the suitability of the dataset for QSARs, including quantum-mechanical and image-analysis computational approaches for descriptor calculation. Applicable to existing datasets, or as experimental-design support with subsequent access to data processing services and assessment of data completeness and re-usability.
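Once descriptors have been computed, a nano-QSAR reduces to fitting and validating a regression model on them. The sketch below uses a random forest with cross-validation via scikit-learn; the descriptor matrix and endpoint are synthetic placeholders, not a recommended model.

```python
# Minimal nano-QSAR sketch: fit a regression model on pre-computed
# descriptors and cross-validate it. Data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))        # descriptor matrix (materials x descriptors)
y = 0.7 * X[:, 0] - 0.2 * X[:, 3] + rng.normal(scale=0.1, size=60)  # toy endpoint

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```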
A range of tools to support ontology development, such as a Jenkins installation, along with UM expertise to continue developing the ontology for broader aspects of nanosafety assessment.
Tools for data visualisation and risk analysis, such as the DC Risk Assessment Application. Applicable to existing datasets, or as experimental-design support with subsequent access to data processing services and assessment of data completeness and re-usability. Full technical support for the generation of high-quality joint publications.
Implementation of Integrated Testing Strategies (ITS). Co-development of nanosafety-specific ITS examples via mini-projects, whereby DC scientists and computational experts utilise submitted datasets to develop an ITS for a specific set of endpoints, based on the minimal dataset that needs to be generated experimentally for hazard and risk assessments. Access to DC’s state-of-the-art tools for predictive nanotoxicology, applicable to existing datasets or following initial experimental-design support to maximise the output from the predictive approaches.
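One computational ingredient of identifying such a minimal experimental dataset is ranking candidate measurements by their predictive value, for instance via recursive feature elimination. The sketch below illustrates this with scikit-learn on synthetic data; it is an illustration of the idea, not DC's actual ITS methodology.

```python
# Sketch: rank candidate input measurements by predictive value, as one
# ingredient of an ITS (which measurements are worth running?). Synthetic data.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 10))          # candidate measurements per material
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + rng.normal(scale=0.1, size=80)

# Recursively eliminate the least informative measurements, keeping three.
selector = RFE(LinearRegression(), n_features_to_select=3).fit(X, y)
print("keep measurements:", np.flatnonzero(selector.support_))
```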
Computational-services access to data visualisation tools (e.g. those embedded in ToxFlow) for the generation of high-quality joint publications.
Access to a range of computational tools for predictive nanotoxicology, including Jaqpot Quattro for the calculation of descriptors and QSARs, and the image analysis platform. Access to NTUA’s state-of-the-art tools for predictive nanotoxicology can be applied to existing datasets or following experimental-design support, with subsequent access to data processing services to maximise the output from the predictive approaches.
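To give a flavour of image-based descriptor calculation, the sketch below extracts simple particle descriptors (area, eccentricity) from a labelled binary image using scikit-image. The synthetic image stands in for real microscopy data, and this is a generic illustration rather than NTUA's image analysis platform itself.

```python
# Sketch: extract simple particle descriptors from a binary microscopy
# image using scikit-image. The synthetic image stands in for TEM/SEM data;
# real images would first need segmentation/threshold tuning.
import numpy as np
from skimage import measure

# Synthetic binary image with two "particles".
img = np.zeros((64, 64), dtype=int)
img[10:20, 10:22] = 1
img[40:52, 35:45] = 1

labels = measure.label(img)                 # connected-component labelling
for region in measure.regionprops(labels):
    print(f"particle {region.label}: area={region.area}, "
          f"eccentricity={region.eccentricity:.2f}")
```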
Computational-services access to data visualisation tools (e.g. those embedded in KNIME) for the generation of high-quality joint publications.
Access to a range of computational tools for predictive nanotoxicology, including the current Enalos KNIME nodes, custom-developed nodes for User applications, and read-across approaches. A design-of-experiments tool supports Users in designing nanomaterials that avoid key features linked to hazard (“safety by design”). Applicable to existing datasets or following experimental-design support, with subsequent access to data processing services to maximise the output from the predictive approaches.
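At its core, read-across estimates an endpoint for a data-poor material from its nearest analogues in descriptor space. The sketch below expresses this as distance-weighted k-nearest-neighbour inference with scikit-learn; the descriptors and endpoint values are synthetic placeholders, not the Enalos implementation.

```python
# Sketch: read-across as k-nearest-neighbour inference in descriptor space.
# A data-poor material inherits the distance-weighted endpoint values of
# its closest analogues. Data are synthetic placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

X_known = np.array([[12.0, -30.0],   # e.g. [size_nm, zeta_mV] per material
                    [15.0, -28.0],
                    [80.0, -10.0],
                    [95.0, -8.0]])
y_known = np.array([0.2, 0.25, 0.8, 0.9])   # measured hazard endpoint

model = KNeighborsRegressor(n_neighbors=2, weights="distance")
model.fit(X_known, y_known)

X_new = np.array([[14.0, -29.0]])           # untested analogue
print("read-across estimate:", model.predict(X_new))
```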
Computational approaches including bioinformatics data visualisation tools, and approaches for categorisation and grouping of nanomaterials that are aligned with those used by regulators.
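A common starting point for grouping is hierarchical clustering on standardised descriptors, as in the sketch below with SciPy; the descriptor values are illustrative and any regulatory grouping would of course involve far more than a clustering step.

```python
# Sketch: group nanomaterials by hierarchical clustering of standardised
# descriptors, a common first step in grouping/categorisation workflows.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import zscore

# Illustrative descriptors, e.g. [size_nm, zeta_mV] per material.
X = np.array([[12.0, -30.0],
              [14.0, -28.0],
              [85.0, -9.0],
              [90.0, -11.0]])

Z = linkage(zscore(X, axis=0), method="ward")      # cluster standardised data
groups = fcluster(Z, t=2, criterion="maxclust")    # cut tree into two groups
print("group assignment:", groups)
```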
Access to risk assessment workflows where results need to be visualised and reported. Applicable to existing datasets, or as experimental-design support with subsequent access to the risk assessment workflows.
Access to, and supporting expertise in, the development of Adverse Outcome Pathways (AOPs) utilising a range of open-source and online approaches, including WikiPathways, ArrayAnalysis and BridgeDb. Virtual access via the development of guidance and online support.
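Pathway work of this kind typically begins with identifier mapping, which BridgeDb exposes as a public web service. The sketch below queries it over HTTP; the URL pattern, organism name and system code ("L" for Entrez Gene) are assumptions based on the service's documented conventions and should be checked against the current BridgeDb documentation.

```python
# Sketch: map a gene identifier across databases via the BridgeDb web
# service, a typical first step before pathway analysis in WikiPathways.
# URL pattern, organism name and system code are assumptions to verify
# against the BridgeDb documentation.
import requests

BASE = "https://webservice.bridgedb.org"
organism, system_code, identifier = "Human", "L", "7157"  # Entrez Gene TP53

resp = requests.get(f"{BASE}/{organism}/xrefs/{system_code}/{identifier}",
                    timeout=30)
resp.raise_for_status()
for line in resp.text.splitlines():
    print(line)   # tab-separated: mapped identifier, target database
```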
Access to an environmental fate model catalogue and framework, along with the technical support needed to run the models. Applicable to existing datasets, or as experimental-design support with subsequent access to data processing services.
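To indicate the kind of model such a catalogue contains, the sketch below integrates a minimal first-order fate model: nanoparticle concentration in a water compartment declining through dissolution and sedimentation. The rate constants are purely illustrative, not parameters from any catalogued model.

```python
# Minimal sketch of a first-order environmental fate model: nanoparticle
# concentration in a water compartment with dissolution and sedimentation
# losses. Rate constants are purely illustrative.
import numpy as np
from scipy.integrate import odeint

k_diss, k_sed = 0.05, 0.10        # per-day loss rates (assumed)

def d_conc(c, t):
    return -(k_diss + k_sed) * c  # first-order decline

t = np.linspace(0.0, 30.0, 31)    # days
c = odeint(d_conc, 1.0, t)        # initial concentration 1.0 (arbitrary units)
print(f"fraction remaining after 30 d: {c[-1, 0]:.3f}")
```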
Access to the Biomax NanoCommons Knowledge Base for data storage and tool implementation.