
Monday 23 Oct

Starts at 09:00

09:00 - 12:00

Chair: Wendy Warr, Wendy Warr & Associates, UK

Welcome

The Next Era: Deep Learning for Biomedical Research

Deep learning is hot: it is making waves, delivering results, and has become something of a buzzword, with a desire to apply it to anything digital. Unlike the brain, artificial neural networks have a strict, predefined structure. The brain is made up of neurons that communicate via electrical and chemical signals; artificial neural networks do not differentiate between these two types of signal. They are essentially a series of advanced statistical exercises that review the past to indicate the likely future. Another buzzword of recent years, across all industries, is “big data”. In biomedical and health sciences, both unstructured and structured information constitute big data. Deep learning needs a lot of data, while big data has value only when it generates actionable insight; the two areas are therefore destined to be married, and the time is ripe for a synergistic association that will benefit pharmaceutical companies. It may be only a short time before pharmaceutical and biotechnology companies have vice presidents of machine learning or deep learning. This presentation will review the prominent deep learning methods and discuss their usefulness in biomedical and health informatics.

 

AI (Artificial Intelligence) and its use in patent drafting and searching

A presentation on the development of AI in the IP field. Will patents in future be (partly) drafted by the computer? And how about retrieving knowledge from patents using AI? Will we be able to find sweet spots, technology by technology? This presentation aims to deepen the discussion on this topic and to share our thoughts on how open we are to these developments.

 

10:30 - 11:00

Exhibition and Networking Break

Freeware and public databases: Towards a Wiki Drug Discovery?

How much information do scientists need to design new potential drugs?

A thorough overview of public (open access) scientific information sources, and of methods to collect, process, analyse and visualize this information, will be presented. A direct application of such freely available information, in conjunction with freeware, will be described in relation to the efforts of the scientific community to find effective medicines for the Zika virus.

 

12:00 - 14:00

Lunch, Exhibition and Networking

14:00 - 15:40

Chair: René de Planque, Germany

How to effectively monitor Technological Developments in IP

Modern, cutting-edge developments are not reflected in current patent classification systems, which tend to catalogue established technologies. Identifying patent portfolios in such emerging fields proves a challenging job for patent and technology experts.

Going beyond the mere identification of new IP, additional value may be added using a regional geographic weighting combined with consolidated portfolio owner information. 

Effective monitoring of the technological field is achieved by training active-learning search engines to hunt for highly relevant patent documents, thus keeping IP portfolios for emerging technologies up to date. The system we have developed permits extremely accurate updates with drastically reduced noise and a low workload, which has proven invaluable in a world of rapidly increasing data blur.

 

Facilitating Insights through Patent Analysis Education

Patent statistics tools allow the analysis of the robust and unique information embedded in patent documents and enable broad and deep insights/exploration of various R&D fields by identifying research trends, leading markets, prominent innovators/assignees, and many more.

In recent years we are witnessing a growing interest in patent analytics methodologies and tools, both in the academic and the business sectors, as well as among decision makers. This growing interest creates a demand for a new profession - Patent Analytics Specialists.

How can this requirement be fulfilled? What skills are needed to become a patent analytics specialist, and can they be taught? What is the required training process? Should this training process be regulated?

Obviously, in order to become a patent analytics specialist a deep familiarity with patent information is required but what are the advantages of approaching the topic from scratch? What lessons can be learned about patent analytics methodology from the teaching process itself? What are the best practices in patent analytics teaching?

The proposed presentation will present the conclusions of the accumulated experience over six years of patent analysis for both academic research and corporate organizations alongside 14 years of experience in teaching information retrieval and patent searching methodologies. This experience has led to the development of a unique teaching method to train students from different backgrounds with no familiarity with patent searches and patent landscaping to perform a complete analysis of patents.

The presentation will explore the benefits of collaborative work in patent searching and analysis. It will also explore the tools required to create patent-based market analysis, the contribution of non-professional patent searchers to the process of patent analysis (i.e. creativity, originality, innovative thinking, and independence), present a comparative review of different types of patent databases and of advanced techniques to adjust the analysis to the target audience. 

 


15:40 - 16:10

Exhibition and Networking Break

16:10 - 17:40

Chair: Aalt van de Kuilen, Patent Information Services, Netherlands

Building a Linked Data Knowledge Graph for the Scholarly Publishing Domain - The Springer Nature SciGraph

Springer Nature SciGraph is the new Linked Open Data platform aggregating data sources from Springer Nature and key partners from the scholarly domain. It will initially collate information from across the research landscape, such as funders, research projects, conferences, affiliations and publications. Additional data, such as citations, patents, clinical trials and usage numbers, will follow over time. This high-quality data from trusted and reliable sources provides a rich semantic description of how information is related, and enables innovative visualizations of the scholarly domain.

 

Looking at the gift horse: pros and cons of over 20 million patent-extracted structures in PubChem

As of August 2017, the major automated patent chemistry extractions (in ascending order of size: NextMove, SCRIPDB, IBM and SureChEMBL) are included as submitters for 21.5 million CIDs out of the PubChem total of 93.8 million. This presentation will expand on the following aspects, starting with the advantages: a) while the relative coverage of open versus commercial sources is difficult to determine (PMID 26457120), it is clear that the majority of patent-exemplified structures of medicinal chemistry interest (i.e. from C07 plus A61) are now in PubChem; b) this allows most first filings of lead series and clinical candidates to be tracked; c) the PubChem toolbox has query, analysis, clustering and linking features difficult to match in commercial sources; d) many structures can be associated with bioactivity data; e) connections between manually curated papers and patents can be made via the 0.48 million CIDs that intersect with ChEMBL. However, looking more closely also reveals disadvantages: a) extraction coverage is compromised by dense image tables and the poor OCR quality of WO documents; b) SureChEMBL is the only major open pipeline continuously running in situ, but it has a PubChem updating lag; c) automated extraction generates structural “noise” that degrades chemistry quality; d) PubChem patent document metadata indexing is patchy (although better in SureChEMBL in situ); e) nothing in the records indicates IP status; f) continual re-extraction of common chemistry results in over-mapping (e.g. 126,949 patents for aspirin and 14,294 for atorvastatin); g) authentic compounds are contaminated with spurious mixtures and never-made virtual compounds, including thousands of deuterated drugs; h) linking between assay data and targets is still a manual exercise. All things considered, however, the PubChem patent “big bang” presents users with the best of both worlds (PMID 26194581). Academics and smaller enterprises that cannot afford commercial solutions can now mine patents extensively.
For those who have such subscriptions, PubChem has become an essential complementary source for the analysis of patent chemistry and associated bioentities such as diseases and drug targets.
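As a minimal illustration of the query features mentioned above, the sketch below builds PubChem PUG REST URLs for the kind of lookups behind these numbers: resolving a compound name to its CID, and listing a CID's patent cross-references. The endpoint patterns follow PubChem's documented PUG REST scheme; no network call is made here, and the helper names are our own.

```python
# Sketch: building PubChem PUG REST URLs (name -> CID, CID -> patent xrefs).
# Helper names are illustrative; the endpoint patterns are PubChem's PUG REST.

BASE = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"

def cid_lookup_url(name: str, fmt: str = "JSON") -> str:
    """URL that resolves a compound name to its PubChem CID(s)."""
    return f"{BASE}/compound/name/{name}/cids/{fmt}"

def patent_xrefs_url(cid: int, fmt: str = "JSON") -> str:
    """URL that lists the PatentID cross-references for a given CID."""
    return f"{BASE}/compound/cid/{cid}/xrefs/PatentID/{fmt}"

print(cid_lookup_url("aspirin"))
print(patent_xrefs_url(2244))  # CID 2244 is aspirin
```

Fetching the second URL returns the patent identifiers mapped to the compound, which is the route by which counts such as the 126,949 patents for aspirin can be reproduced (and where the over-mapping problem becomes visible).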

Dealing with Patent Families in FTO Searches, Portfolio Analysis and Patent Statistics

Patent families may introduce an additional layer of complexity to many patent searches; FTO searches and portfolio analyses are particularly affected. The presentation discusses issues caused by patent families and looks at different approaches for dealing with them: patent family search in various search engines, selection of the best family member based on the scope of the search, selection considering the language skills of the patent researcher, dealing with complex families, family legal status, and reporting. The selection of patent family members for FTO searches and patent statistics will be demonstrated in a brief case study.

19:30 - 22:00

Conference Dinner @ Kulturbrauerei / Leyergasse 6 / 69117 Heidelberg