notes on "artificial intelligence in bioscience symposium 2017"


Page 1: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Artificial Intelligence in Bioscience Symposium | https://www.bioscience.ai/ | #bioai2017 | Sept 14, 2017 | The British Library, London

Petteri Teikari, PhD | http://petteri-teikari.com/ | https://www.linkedin.com/in/petteriteikari/ | Version “Sat 23 September 2017”


Page 2

What was said: A brief overview of what was actually said at the conference.

What could have been said: Had there been more time and interest for in-depth presentations, comments and literature reviews gathered around the discussed topics.

How analyzed: Given my own background in engineering / visual neuroscience / deep learning, drug discovery is not analyzed in depth. The aim was rather to find analogies in electro-optical medicine and in data mining in general.

How structured: In a dense “teaser” fashion, briefly showing the directions that interested readers can pursue if they want to invest their own time in learning more.

Structure of the presentation: A personal, subjective experience of the conference: https://www.bioscience.ai/ #bioai2017

Page 3

Philippe Sanseau Improving drug target selection

Page 4

Philippe Sanseau Improving drug target selection #1

Torkamani et al. "High-Definition Medicine." Cell, Volume 170, Issue 5, 24 August 2017, Pages 828-843. https://doi.org/10.1016/j.cell.2017.08.007

Freedman, Leonard P., Iain M. Cockburn, and Timothy S. Simcoe. "The Economics of Reproducibility in Preclinical Research." PLOS Biology, June 9, 2015. https://doi.org/10.1371/journal.pbio.1002165

“Estimated US preclinical research spend and categories of errors that contribute to irreproducibility.” Irreproducible research is very costly.

Target discovery and genetics evidence: Cook, David, et al. "Lessons learned from the fate of AstraZeneca's drug pipeline: a five-dimensional framework." Nature Reviews Drug Discovery 13.6 (2014): 419. http://dx.doi.org/10.1038/nrd4309

Nelson, Matthew R., et al. "The support of human genetic evidence for approved drug indications." Nature genetics 47.8 (2015): 856. http://dx.doi.org/10.1038/ng.3314

“We estimate that selecting genetically supported targets could double the success rate in clinical development. Therefore, using the growing wealth of human genetic data to select the best targets and indications should have a measurable impact on the successful development of new drugs.”

Page 5

Philippe Sanseau Improving drug target selection #2

http://www.targetvalidation.org/ | https://www.opentargets.org/

Koscielny, Gautier, et al. "Open Targets: a platform for therapeutic target identification and validation." Nucleic Acids Research 45.D1 (2016): D985-D994. https://dx.doi.org/10.1093/nar/gkw1055 | Cited by 14

Kafkas, Şenay, Ian Dunham, and Johanna McEntyre. "Literature evidence in Open Targets - a target validation platform." Journal of Biomedical Semantics 8.1 (2017): 20. https://doi.org/10.1186/s13326-017-0131-3

Page 6

Philippe Sanseau Improving drug target selection #3

Ferrero, Enrico, Ian Dunham, and Philippe Sanseau. "In silico prediction of novel therapeutic targets using gene–disease association data." Journal of Translational Medicine 15.1 (2017): 182. https://doi.org/10.1186/s12967-017-1285-6

Feature importance and classification criteria. (a) Feature importance according to two independent feature selection methods (left to right): chi-squared test and information gain. (b) Decision tree classification criteria: colours represent predicted outcome (purple: non-target, green: target). In each node, numbers represent (from top to bottom): outcome (0: non-target, 1: target), number of observations in node per class (left: non-target, right: target), and percentage of observations in node.
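Of the two feature-selection methods named above, information gain is easy to state concretely: it is the drop in label entropy after splitting on a feature. A minimal pure-Python illustration (toy data and function names are mine, not Ferrero et al.'s code):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Reduction in label entropy after splitting on a discrete feature."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [y for f, y in zip(feature, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Toy example: a perfectly informative feature vs. a useless one.
labels  = [1, 1, 0, 0]
perfect = [1, 1, 0, 0]   # mirrors the label -> gain = 1 bit
useless = [0, 1, 0, 1]   # independent of the label -> gain = 0 bits
```

Ranking the candidate target features by this score (or by a chi-squared statistic) gives the kind of ordering shown in panel (a).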

Semi-supervised learning to predict novel targets. Create numeric features by taking the mean score across all diseases:

Genetic associations (germline); somatic mutations; significant gene expression changes; disease-relevant phenotype in animal model; pathway-level evidence

Nested cross-validation and bagging for tuning and model selection

PU learning (Partially Supervised Classification, Learning from Positive and Unlabeled Examples)

https://www.cs.uic.edu/~liub/NSF/PSC-IIS-0307239.html

du Plessis et al. (2014)

In other words, essentially what is nowadays more commonly termed semi-supervised learning (PU learning being the special case where only positive and unlabeled examples are available).
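The "two-step" flavour of PU learning can be sketched compactly: score unlabeled examples against the positives, promote the least positive-looking ones to reliable negatives, then train an ordinary classifier. The toy below uses a nearest-centroid scorer purely for illustration; the data, names, and classifier choice are all hypothetical, not the pipeline used by Ferrero et al.:

```python
def centroid(points):
    dims = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dims)]

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def pu_two_step(positives, unlabeled, n_reliable):
    """Two-step PU learning sketch:
    1) score unlabeled points by distance to the positive centroid;
    2) take the farthest n_reliable as 'reliable negatives';
    3) classify new points by nearest centroid (1 = target-like)."""
    c_pos = centroid(positives)
    ranked = sorted(unlabeled, key=lambda p: dist2(p, c_pos), reverse=True)
    c_neg = centroid(ranked[:n_reliable])
    return lambda p: 1 if dist2(p, c_pos) < dist2(p, c_neg) else 0

# Toy data: known targets cluster near (1, 1); unlabeled genes mix both clusters.
pos = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1)]
unl = [(1.0, 1.2), (5.0, 5.0), (5.2, 4.8), (4.9, 5.1)]
predict = pu_two_step(pos, unl, n_reliable=3)
```

The same skeleton works with any scorer in step 1 and any classifier in step 3, which is why PU learning is usually described as a recipe rather than a single algorithm.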

Page 7

Philippe Sanseau Improving drug target selection #4

Literature text mining validation of predictions using SciBite: https://www.scibite.com/

https://www.scibite.com/case-studies/case-study/biomarker-discovery-in-biomedical-literature/

→ https://doi.org/10.1186/2043-9113-4-13

Page 8

Philippe Sanseau Comments and further literature background: Reproducibility #1

Poor reproducibility of studies is detrimental to the progress of science.

There is a need to develop an R-factor-style index, or rather to popularize it, as initiatives for this already exist:

http://verumanalytics.io/ (Sean Rife, Josh Nicholson, Yuri Lazebnik, Peter Grabitz)

We propose to solve the credibility problem by assigning each scientific report a simple measure of veracity, the R-factor, with R standing for reputation, reproducibility, responsibility, and robustness

http://blogs.discovermagazine.com/neuroskeptic/2017/08/21/r-factor-fix-science/

Science with no fiction: measuring the veracity of scientific reports by citation analysis. Peter Grabitz, Yuri Lazebnik, Joshua Nicholson, Sean Rife. bioRxiv, 9 August 2017. https://doi.org/10.1101/172940

Page 9

Philippe Sanseau Comments and further literature background: Reproducibility #2 with Blockchain

Towards a scientific blockchain framework for reproducible data analysis. C. Furlanello, M. De Domenico, G. Jurman, N. Bussola (submitted 20 Jul 2017). https://arxiv.org/abs/1707.06552

Our mechanism builds a trustless ecosystem of researchers, funding bodies and publishers cooperating to guarantee digital and permanent access to information and reproducible results. As a natural byproduct, a procedure to quantify scientists' and institutions' reputation for ranking purposes is obtained.

Decentralized electronic health records (EHR, EMR) reduce the power of the crony-capitalist Epics of the world.

→ more efficient and cost-effective systems → data mining for data-driven medicine gets a lot easier

AI in Ophthalmology | Startup Landscape. Petteri Teikari, PhD. Published on Aug 19, 2016. https://www.slideshare.net/PetteriTeikariPhD/artificial-intelligence-in-ophthalmology

Page 10

Philippe Sanseau Comments and further literature background: Reproducibility #3 with Blockchain

Who Will Build the Health-Care Blockchain? Decentralized databases promise to revolutionize medical records, but not until the health-care industry buys in to the idea and gets to work. By Mike Orcutt, September 15, 2017. https://www.technologyreview.com/s/608821/who-will-build-the-health-care-blockchain/

There are 26 different electronic medical records systems used in the city of Boston, each with its own language for representing and sharing data. Critical information is often scattered across multiple facilities, and sometimes it isn’t accessible when it is needed most—a situation that plays out every day around the U.S., costing money and sometimes even lives. But it’s also a problem that looks tailor-made for a blockchain to solve, says John Halamka, chief information officer at Beth Israel Deaconess Medical Center in Boston.

Emily Vaughn, head of accounts at Gem, a startup that helps companies adopt blockchain technology, says that’s only just starting to be worked out. “There may be specific rules we want to bake into the protocol to make it better for health care,” she says. The system must facilitate the exchange of complex health information between patients and providers, for example, as well as exchanges between providers, and between providers and payers—all while remaining secure from malicious attacks and complying with privacy regulations.

The best way to do all that is still far from clear. But Halamka and researchers at the MIT Media Lab have developed a prototype system called MedRec (pdf), using a private blockchain based on Ethereum. It automatically keeps track of who has permission to view and change a record of medications a person is taking.

Either way, blockchain’s potential for the health-care industry depends on whether hospitals, clinics, and other organizations are willing to help create the technical infrastructure required. To that end, Gem is working with clients to prototype a global, blockchain-based patient identifier that could be linked to hospital records as well as data from other sources like employee wellness programs and wearable health monitors. It could be just the thing to sew together the maddening patchwork of digital systems available now.

How blockchain will finally convert you: Control over your own data. Ben Dickson, TechTalks (@BenDee983), September 9, 2017.

And then there are cases like the massive data breach Equifax reported this week, where 143 million consumers’ social security numbers, addresses, and other data was exposed to hackers and identity thieves. This is where blockchain and distributed ledgers promise consumers real value. Blockchain’s architecture enables user data to be siloed from the server applications that use it. A handful of companies are exploring the concept to put users back in control of their data.

Pillar, another open-source blockchain project, is developing what it calls a personal data locker and “smart wallet.” Pillar is a mobile app that stores and manages your digital assets on the blockchain, where you have full ownership and control. These assets can be cryptocurrencies, health records, contact information, documents, and more. Pillar also aims to address another fundamental problem: The average consumer’s lack of interest in managing their own data.

Projects such as Enigma employ blockchain to preserve user data privacy while sharing it with cloud services and third parties. Enigma’s platform protects data by encrypting it, splitting it into several pieces and randomly distributing those indecipherable chunks across multiple nodes in its network. Enigma uses “secure multiparty computation” for its operations: Each node performs calculations on its individual chunk of data and returns the result to the user, who can then combine it with others to assemble the final result.
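The chunk-splitting idea behind secure multiparty computation is easiest to see in its simplest form, additive secret sharing: a value is split into random shares that individually reveal nothing but sum back to the secret, and share-wise arithmetic yields shares of the result. A minimal sketch (illustrative only; this is not Enigma's actual protocol):

```python
import random

PRIME = 2 ** 61 - 1  # field modulus; each share is uniform mod PRIME

def share(secret, n_nodes):
    """Split a secret into n random shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_nodes - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Only a party holding ALL shares can recover the secret."""
    return sum(shares) % PRIME

# Nodes can compute on shares without seeing the data: adding two
# secrets share-wise yields valid shares of the sum.
a_shares = share(42, 3)
b_shares = share(100, 3)
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
```

Each node holds one indecipherable chunk, performs its local computation, and only the combination of all results reveals the answer, which is the property the Enigma description above relies on.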

Relevant also for the EU General Data Protection Regulation (GDPR).

Page 11

Philippe Sanseau Comments and further literature background: Reproducibility #4 Datasets with Torrent

Making 22.41TB of research data available! http://academictorrents.com/

We've designed a distributed system for sharing enormous datasets - for researchers, by researchers. The result is a scalable, secure, and fault-tolerant repository for data, with blazing fast download speeds. Contact us at [email protected].

Accelerate your hosting for free with our academic BitTorrent infrastructure!

+ One aim of this site is to create the infrastructure to allow open access journals to operate at low cost. By facilitating file transfers, the journal can focus on its core mission of providing world class research. After peer review the paper can be indexed on this site and disseminated throughout our system.

+ Large dataset delivery can be supported by researchers in the field that have the dataset on their machine. A popular large dataset doesn't need to be housed centrally. Researchers can have part of the dataset they are working on and they can help host it together.

+ Libraries can host this data to serve papers from their own campus without becoming the only source of the data. So even if a library's system is broken, other universities can participate in getting that data into the hands of researchers.

Page 12

Winston Hide Could Machine Learning Ever Cure Alzheimer’s Disease?

Page 13

Winston Hide Could Machine Learning Ever Cure Alzheimer’s Disease? #1

The machine learnist's mantra: traceability, interpretability, reproducibility, validatability.

Barbara Engelhardt – latent factor models. Prof. Engelhardt is a PI in the Genotype-Tissue Expression (GTEx) consortium.

Engelhardt, Barbara E., and Matthew Stephens. "Analysis of population structure: a unifying framework and novel methods based on sparse factor analysis." PLoS Genetics 6.9 (2010): e1001117. https://doi.org/10.1371/journal.pgen.1001117

Pathways outperform genes [genome-wide association studies (GWAS)] as classifiers.

Holly F. Ainsworth et al. (2017): The use of causal inference techniques to integrate omics and GWAS data has the potential to improve biological understanding of the pathways leading to disease. Our study demonstrates the suitability of various methods for performing causal inference under several biologically plausible scenarios.

KEYWORDS Bayesian networks, causal inference, Mendelian randomisation, structural equation modelling
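Of the causal-inference techniques listed, Mendelian randomisation has the most compact core: with a single genetic variant as an instrument, the causal effect of exposure on outcome is estimated as the Wald ratio of the variant-outcome and variant-exposure associations. A toy calculation (the effect sizes are invented):

```python
def wald_ratio(beta_outcome, beta_exposure):
    """Single-instrument Mendelian randomisation estimate:
    causal effect of exposure on outcome = beta_GY / beta_GX."""
    return beta_outcome / beta_exposure

# Hypothetical numbers: a variant shifts the exposure by 0.5 SD and the
# outcome (log-odds) by 0.2, implying 0.4 log-odds per SD of exposure.
effect = wald_ratio(beta_outcome=0.2, beta_exposure=0.5)
```

Real analyses combine many variants (e.g. with inverse-variance weighting) and test the instrument assumptions, which is where the Bayesian-network and structural-equation machinery in Ainsworth et al. comes in.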

Pathprint is robust to batch effects and allows comparison of gene expression at the pathway level across multiple array platforms.

Altschuler G, Hofmann O, Kalatskaya I, Payne R, Ho Sui SJ, Saxena U, Krivtsov AV, Armstrong SA, Cai T, Stein L and Hide WA (2013). "Pathprinting: An integrative approach to understand the functional basis of disease." Genome Medicine, pp. 68–81. Bioconductor: https://doi.org/10.18129/B9.bioc.pathprint | Cited by 6 articles
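The intuition behind a pathway-level, platform-comparable score can be sketched with a rank-based toy: score each pathway by the mean normalised expression rank of its member genes within a sample. This is a simplification, not the published Pathprint algorithm, and the gene names are made up:

```python
def expression_ranks(expression):
    """Map gene -> rank of its expression within the sample,
    normalised to [0, 1] so scores are comparable across platforms."""
    ordered = sorted(expression, key=expression.get)
    n = len(ordered)
    return {gene: i / (n - 1) for i, gene in enumerate(ordered)}

def pathway_score(expression, pathway_genes):
    """Mean normalised rank of a pathway's member genes in one sample."""
    ranks = expression_ranks(expression)
    members = [ranks[g] for g in pathway_genes if g in ranks]
    return sum(members) / len(members)

# Toy sample: pathway A genes are highly expressed, pathway B genes are not.
sample = {"g1": 9.0, "g2": 8.5, "g3": 1.2, "g4": 0.8, "g5": 5.0}
score_a = pathway_score(sample, {"g1", "g2"})
score_b = pathway_score(sample, {"g3", "g4"})
```

Because only within-sample ranks enter the score, a systematic batch or platform shift that preserves gene ordering leaves the pathway scores unchanged, which is the robustness property claimed above.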

Taking Bioinformatics to Systems Medicine. Antoine H. C. van Kampen, Perry D. Moerland. https://doi.org/10.1007/978-1-4939-3283-2_2

Page 14

Winston Hide Could Machine Learning Ever Cure Alzheimer’s Disease? #2

How do pathways relate to each other? How does a gene set relate to other curated and data-derived gene sets / pathways? Where does an experimental signature sit on a high-level map of cellular function? Which core pathways drive a phenotype? What is the relationship between upstream genetic/genomic perturbation and the functional phenotype?

Understanding the relative role of a function: gene set enrichment, where edges of the graph represent mutual overlap.
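An enrichment-map-style graph, where gene sets are nodes and edges encode mutual overlap, can be built with a simple Jaccard coefficient and a threshold (the coefficient choice and pathway names here are illustrative, not the Enrichment Map defaults):

```python
def jaccard(a, b):
    """Jaccard overlap between two gene sets."""
    return len(a & b) / len(a | b)

def enrichment_map_edges(gene_sets, threshold=0.25):
    """Edges between gene sets whose Jaccard overlap exceeds the threshold."""
    names = sorted(gene_sets)
    edges = []
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            j = jaccard(gene_sets[u], gene_sets[v])
            if j > threshold:
                edges.append((u, v, round(j, 3)))
    return edges

pathways = {
    "apoptosis":  {"g1", "g2", "g3", "g4"},
    "p53_signal": {"g2", "g3", "g4", "g5"},
    "glycolysis": {"g8", "g9"},
}
edges = enrichment_map_edges(pathways)
```

Clusters in the resulting graph are what tools like the Isserlin et al. Cytoscape app render as functional "themes" on the map.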

Isserlin, Ruth, et al. "Enrichment Map – a Cytoscape app to visualize and explore OMICs pathway enrichment results." F1000Research 3 (2014). https://dx.doi.org/10.12688/f1000research.4536.1

Felgueiras, Juliana, Joana Vieira Silva, and Margarida Fardilha. "Adding biological meaning to human protein-protein interactions identified by yeast two-hybrid screenings: A guide through bioinformatics tools." Journal of Proteomics (2017). https://doi.org/10.1016/j.jprot.2017.05.012

Michaut, Magali, et al. "Integration of genomic, transcriptomic and proteomic data identifies two biologically distinct subtypes of invasive lobular breast cancer." Scientific Reports 6 (2016): 18517. http://doi.org/10.1038/srep18517 | Cited by 23

Maia, Ana-Teresa, et al. "Big data in cancer genomics." Current Opinion in Systems Biology 4 (2017): 78-84. https://doi.org/10.1016/j.coisb.2017.07.007

Page 15

Winston Hide Could Machine Learning Ever Cure Alzheimer’s Disease? #3

→ Pathway Coexpression Network Reproducibility. “Biologists are likely to find that larger studies turn up more and more genetic variants – or “hits” – that have minuscule influences on disease” - Jonathan Pritchard, Stanford University

Gaps in understanding about biochemical networks.

“We might not actually be learning anything hugely interesting until we understand how these networks are connected”- Joe Pickrell, New York Genome Center

New concerns raised over value of genome-wide disease studies. Ewen Callaway, Nature, 15 June 2017. doi:10.1038/nature.2017.22152

https://doi.org/10.1016/j.jbi.2009.09.005

https://doi.org/10.1093/nar/gkw797

http://dx.doi.org/10.1126/science.1087447

http://dx.doi.org/10.1111/gbb.12106

Page 16

Winston Hide Could Machine Learning Ever Cure Alzheimer’s Disease? #4

Sheffield Institute for Translational Neuroscience

Harvard School of Public Health (Yered Hammurabi Pita-Juarez and Les Kobzik)
Massachusetts Institute of Technology (Manolis Kellis)
Cure Alzheimer’s Fund (Rudy Tanzi)
Centre for Genome Translation (Gabriel Altschuler, Vivien Junker, Wenbin Wei, Sarah Morgan, Katjuša Koler, Sandeep Amberkar, David Jones, Sokratis Kariotis, Claira Green)

Winston Hide (@winhide) | Twitter

https://hidelab.wordpress.com/

Small-world property of gene networks: most expressed disease-associated genes are only a few steps from the nearest core gene.

Gaiteri and Sibille (2011). https://doi.org/10.3389/fnins.2011.00095

Schematic of relationship between network structure and differential expression incorporating all results.
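The small-world claim is directly checkable with breadth-first search: count the steps from a disease-associated gene to the nearest core gene. A sketch on a toy interaction network (the gene names and edges are invented, not a real disease network):

```python
from collections import deque

def steps_to_nearest(network, start, targets):
    """BFS path length from `start` to the nearest gene in `targets`
    (None if unreachable). `network` maps gene -> set of neighbours."""
    if start in targets:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        gene, d = queue.popleft()
        for nxt in network.get(gene, ()):
            if nxt in targets:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

# Toy interaction network; "CORE1" stands in for a core network gene.
toy = {
    "APP":   {"PSEN1", "APOE"},
    "PSEN1": {"APP", "CORE1"},
    "APOE":  {"APP"},
    "CORE1": {"PSEN1"},
}
dist = steps_to_nearest(toy, "APP", {"CORE1"})
```

Tabulating this distance over all differentially expressed genes is one way to quantify the "only a few steps from the nearest core gene" observation.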

Page 17

Winston Hide Comments and further literature background: Graphs for understanding genes #1

Geometric Deep Learning: https://www.slideshare.net/PetteriTeikariPhD/geometric-deep-learning

How Different Are Estimated Genetic Networks of Cancer Subtypes? Ali Shojaie, Nafiseh Sedaghat. 22 March 2017 | Big and Complex Data Analysis, pp. 159-192. https://doi.org/10.1007/978-3-319-41573-4_9

Genomic analysis of regulatory network dynamics reveals large topological changes. Nicholas M. Luscombe, M. Madan Babu, Haiyuan Yu, Michael Snyder, Sarah A. Teichmann & Mark Gerstein. Nature 431, 308-312 (16 September 2004). http://dx.doi.org/10.1038/nature02782

http://doi.org/10.1126/science.298.5594.824

Page 18

Winston Hide Comments and further literature background: ‘Network biology’ #1A: Functional connectome

Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity. Emily S Finn, Xilin Shen, Dustin Scheinost, Monica D Rosenberg, Jessica Huang, Marvin M Chun, Xenophon Papademetris & R Todd Constable. Nature Neuroscience 18, 1664–1671 (2015). doi: 10.1038/nn.4135

The dynamic functional connectome: State-of-the-art and perspectives. Maria Giulia Preti, Thomas AW Bolton, Dimitri Van De Ville. NeuroImage (available online 26 December 2016). https://doi.org/10.1016/j.neuroimage.2016.12.061

Functional connectivity dynamically evolves on multiple time-scales over a static structural connectome: Models and mechanisms. Joana Cabral, Morten L. Kringelbach, Gustavo Deco. NeuroImage (available online 23 March 2017). https://doi.org/10.1016/j.neuroimage.2017.03.045

Connectome imaging for mapping human brain pathways. Y Shi and A W Toga. Molecular Psychiatry (2017) 22, 1230–1240. doi: 10.1038/mp.2017.92

“Using connectome imaging, we have the opportunity to develop robust algorithms and software tools to systematically characterize the integrity of these circuits. In addition to in-depth modeling and quantification of these brain circuits, connectome-based parcellation will produce whole-brain network models at much finer resolution than existing works. Together with multimodal fusion strategies, these connectome features will form a set of deep phenotypes for mining with genetic and behavioral data. This matches perfectly with current developments in Big Data and deep learning methods.”

Structural vs functional connectivity. (Left) Advanced tractography algorithms allow reconstructing the white matter fiber tracts from Diffusion-MRI. The structural connectivity matrix SC(n,p) is estimated in proportion to the number of fiber tracts detected between any two brain areas n and p. (Right) On the other hand, the functional connectivity matrix FC(n,p) is computed as the correlation between the brain activity (e.g. BOLD signal) estimated in areas n and p over the whole recording time. Here, the areas refer to 90 non-cerebellar brain areas from the AAL template. - Cabral et al. (2017)
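The FC(n,p) in the caption is just the Pearson correlation between the activity time series of areas n and p over the recording; a minimal construction of the matrix (toy signals, not real BOLD data):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def functional_connectivity(timeseries):
    """FC[n][p] = correlation of activity in areas n and p over time."""
    k = len(timeseries)
    return [[pearson(timeseries[n], timeseries[p]) for p in range(k)]
            for n in range(k)]

# Toy: area 1 roughly mirrors area 0, area 2 is its inverse.
ts = [[1.0, 2.0, 3.0, 4.0],
      [1.1, 2.0, 2.9, 4.1],
      [4.0, 3.0, 2.0, 1.0]]
fc = functional_connectivity(ts)
```

The structural matrix SC(n,p), by contrast, comes from counting tractography streamlines and needs no time dimension at all, which is why the two matrices can disagree.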

Page 19

Winston Hide Comments and further literature background: ‘Network biology’ #1B: Functional connectome

Emerging Frontiers of Neuroengineering: A Network Science of Brain Connectivity. Danielle S. Bassett, Ankit N. Khambhati, and Scott T. Grafton. Annual Review of Biomedical Engineering, Vol. 19:327-352 (June 2017). https://doi.org/10.1146/annurev-bioeng-071516-044511

Neuroengineering is faced with unique challenges in repairing or replacing complex neural systems that are composed of many interacting parts. These interactions form intricate patterns over large spatiotemporal scales and produce emergent behaviors that are difficult to predict from individual elements. Network science provides a particularly appropriate framework in which to study and intervene in such systems by treating neural elements (cells, volumes) as nodes in a graph and neural interactions (synapses, white matter tracts) as edges in that graph. Here, we review the emerging discipline of network neuroscience, which uses and develops tools from graph theory to better understand and manipulate neural systems from micro- to macroscales. We present examples of how human MRI brain imaging data (or EEG, MEG, ECOG, fNIRS, etc.) are being modeled with network analysis and underscore potential pitfalls. We then highlight current computational and theoretical frontiers and emphasize their utility in informing diagnosis and monitoring, brain–machine interfaces, and brain stimulation. A flexible and rapidly evolving enterprise, network neuroscience provides a set of powerful approaches and fundamental insights that are critical for the neuroengineer's tool kit.

Multiscale topology in brain networks. Brain networks express fundamental organizing principles across multiple spatial scales. Brain networks are modeled as a collection of nodes (representing regions of interest with presumably coherent functional responsibilities) and edges (structural connections or functional interactions between brain regions). Constructing connectomes from magnetic resonance imaging (MRI) data: to generate human connectomes with MRI, an anatomic scan delineating gray matter is partitioned into a set of nodes. This scan is combined with either diffusion scans of white matter structural connections or time series of brain activity measured by functional MRI, resulting in a weighted connectivity matrix.

Page 20

Winston Hide Comments and further literature background: ‘Network biology’ #1C: Functional connectome

Emerging Frontiers of Neuroengineering: A Network Science of Brain Connectivity. Danielle S. Bassett, Ankit N. Khambhati, and Scott T. Grafton. Annual Review of Biomedical Engineering, Vol. 19:327-352 (June 2017). https://doi.org/10.1146/annurev-bioeng-071516-044511

Tools for higher-order interactions from algebraic topology. (a) The human connectome is a complex network architecture that contains both dyadic and higher-order interactions. Graph representations of the human connectome encode only dyadic relationships, leaving higher-order interactions unaccounted for. A natural way in which to encode higher-order interactions is in the language of algebraic topology, which defines building blocks called simplices (Giusti et al. 2016): A 0-simplex is a node, a 1-simplex is an edge between two nodes, a 2-simplex is a filled triangle, and so on.
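In this vocabulary, extracting the 2-simplices of a brain graph amounts to enumerating its triangles, which a few lines of code can do (toy edges, for illustration):

```python
from itertools import combinations

def two_simplices(edges):
    """All triangles (2-simplices) in an undirected graph given as edge pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = sorted(adj)
    return [t for t in combinations(nodes, 3)
            if t[1] in adj[t[0]] and t[2] in adj[t[0]] and t[2] in adj[t[1]]]

# Toy network: one filled triangle (A, B, C) plus a dangling edge to D.
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
triangles = two_simplices(edges)
```

Higher simplices (filled tetrahedra and beyond) generalise the same membership test, and counting them across scales is what the algebraic-topology tools cited above automate.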

Brain network regulation and control can help navigate dynamical states. To accomplish behavioral and cognitive goals, brain networks internally navigate a complex space of dynamical states. Putative brain states may be situated in various peaks and troughs of an energy landscape, requiring the brain to expend metabolic energy to move from the current state to the next state. Within the space of possible dynamical states, there are easily accessible states and harder-to-reach states; in some cases, the accessible states are healthy, whereas in other cases, they may contribute to dysfunction, and similarly for the harder-to-reach states. Two commonly observed control strategies used by brain networks are average control and modal control. In average control, highly central nodes navigate the brain towards easy-to-reach states. In contrast, modal control nodes tend to be isolated brain regions that navigate the brain toward hard-to-reach states that may require additional energy expenditure (Gu et al. 2015). As a self-regulation mechanism for preventing transitions towards damaging states, the brain may employ cooperative and antagonistic push–pull strategies (Khambhati et al. 2016). In such a framework, the propensity for the brain to transition toward a damaging state might be competitively limited by opposing modal and average controllers whose goal would be to pull the brain toward less damaging states.

Network control theory offers a powerful tool set for neuroengineers concerned with how to exogenously control a neural system and accurately predict the outcome on neurophysiological dynamics—and, by extension, cognition and behavior. Indeed, how to target, tune, and optimize stimulation interventions is one of the most pressing challenges in the treatment of Parkinson disease and epilepsy, for example (Johnson et al. 2013). More broadly, this question directly affects the targeting of optogenetic stimulation in animals (Ching et al. 2013) and the use of invasive and noninvasive stimulation in humans (e.g., deep brain, grid, transcranial magnetic, transcranial direct current, and transcranial alternating current stimulation) (Muldoon et al. 2016).
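Network control theory typically starts from linear dynamics of the form x(t+1) = A x(t) + B u(t), where A is the connectivity matrix and B selects the driven (stimulated) nodes. A toy simulation of driving one node of a small stable network (the matrix values are invented):

```python
def step(A, x, driven_nodes, u):
    """One step of x(t+1) = A x(t) + B u(t); B selects the driven nodes."""
    n = len(x)
    nxt = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    for i in driven_nodes:
        nxt[i] += u
    return nxt

# Weakly coupled 3-node chain, stable (spectral radius < 1).
A = [[0.5, 0.2, 0.0],
     [0.2, 0.5, 0.2],
     [0.0, 0.2, 0.5]]

x = [0.0, 0.0, 0.0]
for _ in range(50):
    x = step(A, x, driven_nodes=[0], u=1.0)  # constant drive at node 0
```

Activity injected at node 0 settles into a steady state that decays along the chain, the simplest picture of how input at a stimulation site propagates through the rest of the network; average and modal controllability scores quantify which choices of driven node reach which states cheaply.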

Clinical translation of network neuroscience tools. Network neuroscience offers a natural framework for improving tools to diagnose and treat brain network disorders (e.g. epilepsy). … Functional connectivity patterns demonstrate strong interactions around the brain regions in which seizures begin and weak projections to the brain regions in which seizures spread. Objective tools in network neuroscience can usher in an era of personalized algorithms capable of mapping epileptic network architecture from neural signals and pinpointing implantable neurostimulation devices to specific brain regions for intervention (Khambhati et al. 2015, 2016; Muldoon et al. 2016)

Page 21

Winston Hide Comments and further literature background: ‘Network biology’ #1D: Functional connectome

Connectivity Inference from Neural Recording Data: Challenges, Mathematical Bases and Research Directions. Ildefons Magrans de Abril, Junichiro Yoshimoto, Kenji Doya (submitted 6 Aug 2017). https://arxiv.org/abs/1708.01888

Connectivity inference itself is an interesting and deep mathematical problem, but the goal of connectivity inference is not only to precisely estimate the connection weight matrix, but also to illustrate how neural circuits realize specific functions, such as sensory inference, motor control, and decision making.

If we can perfectly estimate network connections from anatomical and activity data, then computer simulation of the network model should be able to reproduce the function of the network. But given inevitable uncertainties in connectivity inference, reconstruction of function in a purely data-driven way might be difficult. How to extract or infer a functional or computational network from a data-driven network, or even to combine known functional constraints as a prior for connectivity inference, is a possible direction of future research.

Page 22

Winston Hide Comments and further literature background: ‘Network biology’ #2: Brain and Genes

Inter-regional ECoG correlations predicted by communication dynamics, geometry, and correlated gene expression. Richard F. Betzel, John D. Medaglia, Ari E. Kahn, Jonathan Soffer, Daniel R. Schonhaut, Danielle S. Bassett (submitted 19 Jun 2017). https://arxiv.org/abs/1706.06088

Our models accurately predict out-of-sample electrocorticography (ECoG) networks and perform well even when fit to data from individual subjects, suggesting shared organizing principles across persons. In addition, we identify a set of genes whose brain-wide co-expression is highly correlated with ECoG network organization. Using gene ontology analysis, we show that these same genes are enriched for membrane and ion channel maintenance and function, suggesting a molecular underpinning of ECoG connectivity. Our findings provide fundamental understanding of the factors that influence interregional ECoG networks, and open the possibility for predictive modeling of surgical outcomes in disease.

Page 23

Winston Hide Comments and further literature background: ‘Network Science Modelling’ #1

Modelling And Interpreting Network Dynamics. Ankit N Khambhati, Ann E Sizemore, Richard F Betzel, Danielle S Bassett. bioRxiv. https://doi.org/10.1101/124016

Pharmacologic modulation of network dynamics. (A) By blocking or enhancing neurotransmitter release through pharmacologic manipulation, investigators can perturb the dynamics of brain activity. For example, an NMDA receptor agonist might hyper-excite brain activity, while an NMDA receptor antagonist might reduce levels of brain activity. (B) Hypothetically speaking, by exogenously modulating levels of a neurotransmitter, one might be able to titrate the dynamics of brain activity and the accompanying functional connectivity to avoid potentially damaging brain states.

Page 24

Winston Hide Comments and further literature background: ‘Network Science Modelling’ #2A: Multilayer Networks

Isomorphisms in Multilayer Networks. Mikko Kivelä and Mason A. Porter, Oxford Centre for Industrial and Applied Mathematics (last revised 16 Feb 2017). https://arxiv.org/abs/1506.00508

We reduce each of the multilayer network isomorphism problems to a graph isomorphism problem, where the size of the graph isomorphism problem grows linearly with the size of the multilayer network isomorphism problem. One can thus use software that has been developed to solve graph isomorphism problems as a practical means for solving multilayer network isomorphism problems. Our theory lays a foundation for extending many network analysis methods --- including motifs, graphlets, structural roles, and network alignment --- to any multilayer network.

Perhaps the most exciting direction in research on multilayer networks is the development of methods and models that are not direct generalizations of any of the traditional methods and models for ordinary graphs [Kivelä et al. 2014]. The fact that there are multiple types of isomorphisms opens up the possibility to help develop such methodology by comparing different types of isomorphism classes. We also believe that there will be an increasing need for the study of networks that have multiple aspects (e.g., both time-dependence and multiplexity), and our isomorphism framework is ready to be used for such networks.
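One way to picture reducing a multilayer network to an ordinary graph is the familiar supra-adjacency construction: every (node, layer) pair becomes a node of a single graph, with intra-layer edges plus coupling edges between a node's copies across layers. The sketch below illustrates that flattening only; it is not the paper's isomorphism-preserving encoding:

```python
def supra_graph(layers, coupled_nodes):
    """Flatten a multilayer network into one ordinary edge set.
    `layers` maps layer name -> list of (u, v) edges;
    `coupled_nodes` lists nodes whose copies are linked across layers."""
    layer_names = sorted(layers)
    edges = set()
    for name, layer_edges in layers.items():
        for u, v in layer_edges:
            edges.add(((u, name), (v, name)))       # intra-layer edge
    for node in coupled_nodes:
        for a, b in zip(layer_names, layer_names[1:]):
            edges.add(((node, a), (node, b)))       # inter-layer coupling
    return edges

# Toy two-layer brain network: structural and functional layers.
multilayer = {
    "structural": [("n1", "n2")],
    "functional": [("n1", "n2"), ("n2", "n3")],
}
flat = supra_graph(multilayer, coupled_nodes=["n1", "n2"])
```

Once flattened, any single-layer tool (including graph-isomorphism software, as the authors note) can be run on the result, at the cost of a graph whose size grows with nodes times layers.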

Efforts aimed at understanding and integrating the study of social and brain network dynamics will advance understanding of basic psychological principles and aid in deriving fundamental principles about the organization of society. However, even beyond fundamental knowledge, work at this intersection has the potential to improve real-world practice in clinical treatments for mental and physical disorders, predicting behavior change in response to persuasive messages, and improving educational outcomes including learning and creativity. For example, for people whose brain and/or social networks show differential responses to treatments, logged information (e.g., from social media) could aid in providing tailored interventions.

Brain and Social Networks: Fundamental Building Blocks of Human Experience. Emily B. Falk, Danielle S. Bassett, University of Pennsylvania. Trends in Cognitive Sciences Volume 21, Issue 9, September 2017, Pages 674-690

https://doi.org/10.1016/j.tics.2017.06.009

Page 25: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Winston Hide | Comments and further literature background: ‘Network Science Modelling’ #2B: Multilayer Networks

Multilayer Brain Networks. Michael Vaiana, Sarah Muldoon (Submitted on 7 Sep 2017). https://arxiv.org/abs/1709.02325

Here, we review multilayer networks and their applications in neuroscience, showing how incorporating the multilayer framework into network neuroscience analysis has uncovered previously hidden features of brain networks. We specifically highlight the use of multilayer networks to model disease, structure-function relationships, network evolution, and link multi-scale data. Finally, we close with a discussion of promising new directions of multilayer network neuroscience research and propose a modified definition of multilayer networks designed to unite and clarify the use of the multilayer formalism in describing real-world systems.

a. A cartoon illustrating how glia serve to distribute resources to neural synapses. b. A simplified graph representing the two-layer glia-neuron network model.

Despite the utility of multilayer networks, to date, there are relatively few neuroscientific studies that incorporate the multilayer framework. It will be important for future research to utilize the ever expanding knowledge base and set of measures for multilayer networks as well as drive development of measures with improved sensitivity and specificity for the many potential applications. The multilayer network framework has the potential to become the prominent mode of network analysis in the future, as neuroscientists face increasingly multi-modal, multi-temporal, or multi-scale data. Multilayer network science is in its infancy and comprehensive research into the structure and function of brain networks will be necessary as both multilayer networks and neuroscience develop in tandem.

Page 26: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Winston Hide | Comments and further literature background: ‘Network Science Modelling’ #3: Dynamic Connectivity

Dynamic Graph Metrics: Tutorial, Toolbox, and Tale. Ann E. Sizemore, Danielle S. Bassett, University of Pennsylvania (Submitted on 30 Mar 2017). https://arxiv.org/abs/1703.10643

Visualizations of dynamic networks. (a) Stacked static network representation of a dynamic network on ten nodes. (b) Time-aggregated graph of dynamic network in (a). Any two nodes that are connected at any time in (a) are connected in this graph. (c) Visualization of network in (a) as contacts across time. (d) Dynamic network of one individual during a motor learning task. Green regions correspond to a functional module composed of motor areas, blue regions correspond to a functional module composed of visual regions, and red regions correspond to areas that were not in either the motor or visual module

Time respecting paths. (a) (Left) Time aggregated network from Fig. 1b with green and blue paths highlighted. (Right) Contact sequence plot from Fig. 1c with green and blue paths highlighted. (b) The source set of the peach node indicated with a peach ring. (c) Composition of the source set of nodes from the visual (left) and motor (right) modules of our example empirical fMRI data set, depicted across time. The gray line indicates the fraction of all nodes in the source set, while the blue and green lines represent the fraction of the visual and motor nodes within the source set, respectively. (d) Illustration of the set of influence (t−8) of the gold node. Nodes within this set indicated with a gold ring at the time at which they can first be reached by the gold node. (e) Composition of the set of influence calculated from nodes within the visual (left) and motor (right) groups. As in (c), the fraction of all regions (gray), visual regions (blue), and motor regions (green) are plotted against time. Solid lines in (c) and (e) mark the average over subjects and trials, and shaded regions represent two standard deviations from this average.
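The "source set" and "set of influence" described in the caption are straightforward to compute from a contact sequence. A minimal pure-Python sketch (not the authors' MATLAB toolbox; for simplicity it treats same-time contacts as instantaneously chainable):

```python
def set_of_influence(contacts, source, t0):
    # Nodes reachable from `source` along time-respecting paths that start
    # at or after t0; contacts are (time, u, v) undirected events.
    reached = {source}
    for t, u, v in sorted(contacts):
        if t < t0:
            continue
        if u in reached:
            reached.add(v)
        if v in reached:
            reached.add(u)
    return reached - {source}

def source_set(contacts, target, t_end):
    # Nodes that can reach `target` by t_end: the set of influence of
    # `target` in the time-reversed contact sequence.
    rev = [(-t, u, v) for t, u, v in contacts if t <= t_end]
    return set_of_influence(rev, target, -t_end)

# Toy contact sequence: note the d-e contact happens *before* d is reachable
# from a, so e is not in a's set of influence.
contacts = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'c', 'd'), (1, 'd', 'e')]
```

This asymmetry (e can reach d, but a cannot reach e) is exactly why time-aggregated graphs, as in panel (b), overestimate reachability.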

Page 27: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Winston Hide | Comments and further literature background: ‘Perturbing Neural Networks’ #1

Control of Dynamics in Brain Networks. Evelyn Tang, Danielle S. Bassett (Submitted on 6 Jan 2017 (v1), last revised 19 Jan 2017 (this version, v2)). https://arxiv.org/abs/1701.01531

Model for adaptive cognitive control showing distinct mechanisms between different brain regions. Schematic of a neural network connecting the prefrontal cortex, which executes much of the “top-down” control actions, to other brain regions. Another part of the brain – the anterior cingulate cortex – serves as a conflict monitoring mechanism that modulates the activity of control representations, while an adaptive gating mechanism regulates the updating of control representations in prefrontal cortex through dopaminergic projections.

Controllability metrics are positively correlated with age, with older youth displaying greater average and modal controllability than younger youth. Each data point represents the average strength of controllability metrics calculated on the brain network of a single individual, in a cohort of 882 healthy youth from ages 8 to 22. Brain networks were found to be optimized to support energetically easy transitions (average controllability) as well as energetically costly ones (modal controllability). The color bar denotes the age of the subjects, illustrating a significant correlation between age and the ability to support this diverse range of dynamics (Tang et al., 2016).
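Average controllability, as used in the figure, is typically computed as the trace of the controllability Gramian of a linear model x(t+1) = A x(t) + B u(t), with B selecting the control node. A minimal pure-Python sketch on a 3-node path graph (finite-horizon Gramian; A is scaled by 1/(1 + λmax) for stability, and λmax = √2 is hardcoded because it holds for this particular graph):

```python
import math

def matvec(A, x):
    # Matrix-vector product for a list-of-rows matrix
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def avg_controllability(A, node, horizon=50):
    # Trace of the finite-horizon Gramian W = sum_k A^k e_i e_i^T (A^T)^k,
    # i.e. sum_k ||A^k e_i||^2 (A is symmetric here, so A^T = A)
    x = [1.0 if j == node else 0.0 for j in range(len(A))]
    total = 0.0
    for _ in range(horizon):
        total += sum(v * v for v in x)
        x = matvec(A, x)
    return total

# 3-node path graph 0-1-2, edge weights scaled by 1/(1 + lambda_max);
# lambda_max = sqrt(2) for this graph
s = 1.0 / (1.0 + math.sqrt(2))
A = [[0.0, s, 0.0], [s, 0.0, s], [0.0, s, 0.0]]
```

On this toy network the high-degree middle node has higher average controllability than an end node, mirroring the hub intuition behind the figure.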

Page 28: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Winston Hide | Comments and further literature background: ‘Perturbing Neural Networks’ #2

Topological Principles of Control in Dynamical Network Systems. Jason Kim, Jonathan M. Soffer, Ari E. Kahn, Jean M. Vettel, Fabio Pasqualetti, Danielle S. Bassett (Submitted on 1 Feb 2017 (v1), last revised 6 Feb 2017 (this version, v2)). https://arxiv.org/abs/1702.00354

Network Control of the Drosophila, Mouse, and Human Connectomes. (a) A representation of the mouse brain via the Allen Mouse Brain Atlas, with a superimposed simplified network. Each brain region is represented as a vertex, and the connections between regions are represented as directed edges.

The Simplified Network Representation Offers a Reasonable Prediction for the Full Network’s Control Energy. (a) Graphical representation of a non-simplified network of N drivers (red) and M non-drivers (blue), with directed connections between all nodes present.

Energetically Favorable Organization of Topological Features in Networks

To illustrate the utility of the mathematics, we apply this approach to high-resolution connectomes recently reconstructed from drosophila, mouse, and human brains. We use these principles to show that connectomes of increasingly complex species are wired to reduce control energy. We then use the analytical expressions we derive to perform targeted manipulation of the brain’s control profile by removing single edges in the network, a manipulation that is accessible to current clinical techniques in patients with neurological disorders. Cross-species comparisons suggest an advantage of the human brain in supporting diverse network dynamics with small energetic costs, while remaining unexpectedly robust to perturbations. Generally, our results ground the expectation of a system’s dynamical behavior in its network architecture, and directly inspire new directions in network analysis and design via distributed control.

Page 29: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Winston Hide | Comments and further literature background: ‘Perturbing Neural Networks’ #3

Mind Control as a Guide for the Mind. John D. Medaglia, Perry Zurn, Walter Sinnott-Armstrong, Danielle S. Bassett (Submitted on 13 Oct 2016 (v1), last revised 25 Apr 2017 (this version, v2)). https://arxiv.org/abs/1610.04134v2

A block diagram of a PID controller

The ethics of brain control. As efforts to guide complex brain processes advance, we will not only need new theoretical and technical tools. We will also face new societal, legal, and ethical challenges. Our best chance of meeting those challenges is through ongoing, rigorous discussion between scientists, ethicists, and policy makers.

Rethinking human persons. As mind control develops, the ability to interact intelligently with human nature may bring certain stakes into sharper focus. Humans privilege the notion of a “mind” and perceived internal locus of control as central to their identities [Wilson and Lenart 2014]. Further, within minds, humans privilege some traits, such as social comfort, honesty, kindness, empathy, and fairness, as more fundamental than functions, such as concentration, wakefulness, and memory [Riis et al. 2008]. These different values depend on the notion of conscious identity and are often at the core of common ethical distinctions applied to humans versus other animals [Olson 1999]. Importantly, modern notions of human persons, influenced by continuing advances in the cognitive and brain sciences, erode the classical boundary between the ethical treatment of humans and animals [Singer 2011]. … For this reason, scientists, clinicians, ethicists, and philosophers will need to work together.

Page 30: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Winston Hide | Comments and further literature background: Clinical use for Network Analysis #1

Modern network science of neurological disorders. Cornelis J. Stam. Nature Reviews Neuroscience 2014. http://dx.doi.org/10.1038/nrn3801

Recent developments in the application of network science to conditions such as Alzheimer’s disease, multiple sclerosis, traumatic brain injury and epilepsy have challenged the classical concept of neurological disorders being either ‘local’ or ‘global’, and have pointed to the overload and failure of hubs as a possible final common pathway in neurological disorders.
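The "overload and failure of hubs" idea can be made concrete by measuring how much global efficiency (mean inverse shortest-path length) drops when a hub is removed, compared with removing a peripheral node. A small sketch on a toy hub-and-ring network (unweighted BFS distances; purely illustrative, not from the paper):

```python
from collections import deque

def global_efficiency(adj):
    # Mean of 1/shortest-path-length over ordered node pairs (0 if unreachable)
    nodes = list(adj)
    total, pairs = 0.0, 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:                      # BFS from s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t != s:
                pairs += 1
                if t in dist:
                    total += 1.0 / dist[t]
    return total / pairs

def remove_node(adj, n):
    return {u: [v for v in nbrs if v != n] for u, nbrs in adj.items() if u != n}

# Hub 0 connected to all spokes 1..5; spokes also form a ring
adj = {0: [1, 2, 3, 4, 5],
       1: [0, 2, 5], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 3, 5], 5: [0, 4, 1]}
e_full = global_efficiency(adj)
e_no_hub = global_efficiency(remove_node(adj, 0))
e_no_spoke = global_efficiency(remove_node(adj, 3))
```

Removing the hub hurts network integration far more than removing a spoke, which is the intuition behind hub failure as a final common pathway.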

Clinical implications of omics and systems medicine: focus on predictive and individualized treatment. Mikael Benson. Journal of Internal Medicine (2016). http://dx.doi.org/10.1111/joim.12412

Page 31: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Winston Hide | Comments and further literature background: Clinical use for Network Analysis #2

Challenges and opportunities for system biology standards and tools in medical research. König, M., Oellrich, A., Waltemath, D., Dobson, R. J. B., Hubbard, T. J. P., & Wolkenhauer, O. In Proceedings of the 7th Workshop on Ontologies and Data in Life Sciences, organized by the GI Workgroup Ontologies in Biomedicine and Life Sciences (Vol. 1692). CEUR-WS.

https://kclpure.kcl.ac.uk/portal/files/59024860/final_submission_odls_2016.pdf

Illustration of the integration process of computational models and data from different sources. The integration strongly relies on the availability and detail of the ontologies used for the semantic annotations. User interfaces need to provide access to the simulation modules, but restrict the change of parameters to ranges that are safe w.r.t. a clinical application. SBML and CellML are standards used to encode models in a computable format. Electronic Health Records (EHRs) refers to any data recorded in a hospital or GP practice.

Network Medicine: Complex Systems in Human Disease and Therapeutics. 23 Feb 2017. Joseph Loscalzo, Albert-László Barabási, Edwin K. Silverman, Elliott M. Antman, Michael E. Calderwood. https://www.amazon.co.uk/Network-Medicine-Complex-Systems-Therapeutics/dp/0674436539/ref=sr_1_3?s=books&ie=UTF8&qid=1505672639&sr=1-3

Big data, genomics, and quantitative approaches to network-based analysis are combining to advance the frontiers of medicine as never before. Network Medicine introduces this rapidly evolving field of medical research, which promises to revolutionise the diagnosis and treatment of human diseases.

Medical researchers have long sought to identify single molecular defects that cause diseases, with the goal of developing silver-bullet therapies to treat them. But this paradigm overlooks the inherent complexity of human diseases and has often led to treatments that are inadequate or fraught with adverse side effects. Rather than trying to force disease pathogenesis into a reductionist model, network medicine embraces the complexity of multiple influences on disease and relies on many different types of networks: from the cellular-molecular level of protein-protein interactions to correlational studies of gene expression in biological samples. The authors offer a systematic approach to understanding complex diseases while explaining network medicine’s unique features, including the application of modern genomics technologies, biostatistics and bioinformatics, and dynamic systems analysis of complex molecular networks in an integrative context.

Next generation of network medicine: interdisciplinary signaling approaches. Tamas Korcsmaros, Maria Victoria Schneider and Giulio Superti-Furga. DOI: 10.1039/C6IB00215C (Review Article). Integr. Biol., 2017, 9, 97-108

Precision Psychiatry Meets Network Medicine: Network Psychiatry. David Silbersweig, MD; Joseph Loscalzo, MD, PhD. JAMA Psychiatry. 2017;74(7):665-666. doi:10.1001/jamapsychiatry.2017.0580

Network medicine: a new paradigm for cardiovascular disease research and beyond. Jörg Menche. Cardiovascular Research, Volume 113, Issue 10, 1 August 2017, Pages e29–e30. https://doi.org/10.1093/cvr/cvx129

Interactome-based approaches to human disease. Michael Caldera, Pisanu Buphamalai, Felix Müller, Jörg Menche. Current Opinion in Systems Biology Volume 3, June 2017, Pages 88-94. https://doi.org/10.1016/j.coisb.2017.04.015

Page 32: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Winston Hide | Comments and further literature background: ’Network Deep Learning’ #1

Deep Learning Architecture with Dynamically Programmed Layers for Brain Connectome Prediction. Vivek Veeriah, Rohit Durvasula, Guo-Jun Qi, University of Central Florida, Orlando, FL, USA

KDD '15 Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Pages 1205-1214 https://doi.org/10.1145/2783258.2783399

Identifying Connectivity Patterns for Brain Diseases via Multi-side-view Guided Deep Architectures. Jingyuan Zhang, Bokai Cao, Sihong Xie, Chun-Ta Lu, Philip S. Yu, Ann B. Ragin. Proceedings of the 2016 SIAM International Conference on Data Mining. https://doi.org/10.1137/1.9781611974348.5

In this paper, we present a novel Multi-side-View guided AutoEncoder (MVAE) that incorporates multiple side views into the process of deep learning to tackle the bias in the construction of connectivity patterns caused by the scarce clinical data. Extensive experiments show that MVAE not only captures discriminative connectivity patterns for classification, but also discovers meaningful information for clinical interpretation.

There are several interesting directions for future work. Since brain connectomes and neuroimages can provide complementary information for brain diseases, one interesting direction of our future work is to explore both brain connectomes and neuroimages in deep learning (i.e. multimodal models). Another potential direction is to combine fMRI and DTI brain connectomes together, because the functional and structural connections together can provide rich information for learning deep feature representations.

Page 33: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Winston Hide | Comments and further literature background: ’Network Deep Learning’ #2A

Multi-view Graph Embedding with Hub Detection for Brain Network Analysis. Guixiang Ma, Chun-Ta Lu, Lifang He, Philip S. Yu, Ann B. Ragin (Submitted on 12 Sep 2017). https://arxiv.org/abs/1709.03659

In this paper, we present MVGE-HD, an auto-weighted framework of Multi-view Graph Embedding with Hub Detection for brain network analysis. We incorporate the hub detection task into the multi-view graph embedding framework so that the two tasks could benefit each other. The MVGE-HD framework learns a unified graph embedding across all the views while reducing the potential influence of the hubs on blurring the boundaries between node clusters in the graph, thus leading to a clear and discriminative node clustering structure for the graph. The extensive experimental results on two real multi-view brain network datasets (i.e., HIV and Bipolar disorder) demonstrate the effectiveness and the superior performance of the proposed framework for brain network analysis.

Identifying Deep Contrasting Networks from Time Series Data: Application to Brain Network Analysis. John Boaz Lee, Xiangnan Kong, Yihan Bao, Constance Moore. Proceedings of the 2017 SIAM International Conference on Data Mining. https://doi.org/10.1137/1.9781611974973.61

We propose a method called GCC (Graph Construction CNN) which is based on deep convolutional neural networks for the task of network construction. The CNN in our model learns a nonlinear edge-weighting function to assign discriminative values to the edges of a network. We also demonstrate the extensibility of our proposed framework by combining it with an autoencoder to capture subgraph patterns from the constructed networks.

Page 34: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Winston Hide | Comments and further literature background: ’Network Deep Learning’ #2B

Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA. Aapo Hyvärinen and Hiroshi Morioka (Submitted on 20 May 2016). https://arxiv.org/abs/1605.06336

Methods such as noise-contrastive estimation [Gutmann and Hyvärinen 2012] and generative adversarial nets [Goodfellow et al. 2014], see also [Gutmann et al. 2014], are similar in spirit, but clearly distinct from TCL which uses the temporal structure of the data by contrasting different time segments. In practice, the feature extractor needs to be capable of approximating a general nonlinear relationship between the data points and the log-odds of the classes.
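The contrastive idea is easy to demonstrate in miniature: generate a nonstationary signal whose variance differs across time segments, then train a classifier on a nonlinear feature to predict which segment each point came from. A hedged, minimal sketch (not the authors' TCL implementation; here the "learned" feature is hardwired to x², a stand-in for the MLP feature extractor):

```python
import math, random

random.seed(0)

# Two time segments with different variances (nonstationarity is the signal)
data = [(random.gauss(0, 0.5), 0) for _ in range(200)] + \
       [(random.gauss(0, 2.0), 1) for _ in range(200)]

# Logistic regression on the nonlinear feature h(x) = x^2, trained to
# discriminate the time segments -- the essence of time-contrastive learning
w, b, lr = 0.0, 0.0, 0.05
for _ in range(500):
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x * x + b)))  # P(segment 1 | x)
        gw += (p - y) * x * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

accuracy = sum(((w * x * x + b) > 0) == (y == 1) for x, y in data) / len(data)
```

The classifier can only succeed by latching onto the segment-dependent statistic (here the variance), which is why the learned features recover the underlying sources in TCL.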

Nonlinear ICA of Temporally Dependent Stationary Sources. Aapo Hyvärinen and Hiroshi Morioka. Appearing in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS) 2017, Fort Lauderdale, Florida, USA. JMLR: W&CP volume 54. http://discovery.ucl.ac.uk/1547625/1/AISTATS2017.pdf

Independently Controllable Factors. Valentin Thomas, Jules Pondard, Emmanuel Bengio, Marc Sarfati, Philippe Beaudoin, Marie-Jean Meurs, Joelle Pineau, Doina Precup, Yoshua Bengio (Submitted on 3 Aug 2017 (v1), last revised 25 Aug 2017 (this version, v2)). https://arxiv.org/abs/1708.01289

Note that there may be several other ways to discover and disentangle underlying factors of variation. … non-linear versions of ICA (e.g. Hyvärinen and Morioka) attempt to disentangle the underlying factors of variation by assuming that their joint distribution (marginalizing out the observed x) factorizes, i.e., that they are marginally independent. Here we explore another direction, trying to exploit the ability of a learning agent to act in the world in order to impose a further constraint on the representation.

Page 35: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Winston Hide | Comments and further literature background: ’Network Deep Learning’ #3

t-BNE: Tensor-based Brain Network Embedding. Bokai Cao, Lifang He, Xiaokai Wei, Mengqi Xing, Philip S. Yu, Heide Klumpp, Alex D. Leow. Proceedings of the 2017 SIAM International Conference on Data Mining. http://doi.org/10.1137/1.9781611974973.22 | https://www.cs.uic.edu/~bcao1/code/t-BNE.zip

Brain network embedding is the process of converting brain network data to discriminative representations of subjects, so that patients with brain disorders and normal controls can be easily separated. However, existing methods either limit themselves to extracting graph-theoretical measures and subgraph patterns, or fail to incorporate brain network properties and domain knowledge in medical science.

In this paper, we propose t-BNE, a novel Brain Network Embedding model based on constrained tensor factorization. t-BNE incorporates:

1) symmetric property of brain networks,

2) side information guidance to obtain representations consistent with auxiliary measures,

3) orthogonal constraint to make the latent factors distinct with each other, and

4) classifier learning procedure to introduce supervision from labeled data

The brain network embedding problem can be further investigated in several directions for future work. For example, we would like to work with domain experts to incorporate a wider variety of guidance and supervision (‘medical knowledge graph’), and learn a joint representation from multimodal brain network data.
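The constrained-factorization idea at the heart of t-BNE can be sketched in a much simpler form: approximate a symmetric connectivity matrix W by F Fᵀ with non-negative F, whose rows then serve as node embeddings. A minimal projected-gradient sketch (not the actual t-BNE model, which adds side information, orthogonality, and a classifier term on a tensor of subjects):

```python
import random

random.seed(1)

def sym_factorize(W, k, steps=2000, lr=0.005):
    # Fit W ~ F F^T with F >= 0 (n x k) by projected gradient descent
    n = len(W)
    F = [[random.random() for _ in range(k)] for _ in range(n)]
    for _ in range(steps):
        # residual R = F F^T - W
        R = [[sum(F[i][c] * F[j][c] for c in range(k)) - W[i][j]
              for j in range(n)] for i in range(n)]
        # gradient of ||R||_F^2 w.r.t. F is 4 R F (W and R are symmetric)
        G = [[sum(R[i][j] * F[j][c] for j in range(n)) for c in range(k)]
             for i in range(n)]
        # gradient step, projected back onto the non-negative orthant
        F = [[max(0.0, F[i][c] - lr * 4 * G[i][c]) for c in range(k)]
             for i in range(n)]
    R = [[sum(F[i][c] * F[j][c] for c in range(k)) - W[i][j]
          for j in range(n)] for i in range(n)]
    err = sum(r * r for row in R for r in row)
    return F, err

# Toy symmetric "connectivity" matrix with two obvious modules
W = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
F, err = sym_factorize(W, 2)
```

With k = 2, the learned rows of F separate the two modules, which is the sense in which the latent factors act as discriminative node/subject representations.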

Page 36: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Winston Hide | Comments and further literature background: ’Network Deep Learning’ #4

Structural Deep Brain Network Mining. Bokai Cao, Lifang He, Xiaokai Wei, Mengqi Xing, Philip S. Yu, Heide Klumpp, Alex D. Leow. KDD '17 Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. https://doi.org/10.1145/3097983.3097988

Mining from neuroimaging data is becoming increasingly popular in the field of healthcare and bioinformatics, due to its potential to discover clinically meaningful structure patterns that could facilitate the understanding and diagnosis of neurological and neuropsychiatric disorders.

In this paper, we propose a Structural Deep Brain Network mining method, namely SDBN, to learn highly non-linear and structure-preserving representations of brain networks. Specifically, we first introduce a novel graph reordering approach based on module identification, which rearranges the order of the nodes to preserve the modular structure of the graph. (…) Further, it has better generalization capability for high-dimensional brain networks and works well even for small sample learning. Benefiting from CNN's task-oriented learning style, the learned hierarchical representation is meaningful for the clinical task. To evaluate the proposed SDBN method, we conduct extensive experiments on four real brain network datasets for disease diagnoses. The experiment results show that SDBN can capture discriminative and meaningful structural graph representations for brain disorder diagnosis.

Since the proposed deep feature learning framework is end-to-end and task-oriented, its application is not limited to binary disease classification. It can be easily extended to other clinical tasks with objectives such as multi-class classification, clustering, regression and ranking. We plan to apply our framework to other medical tasks.
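The graph-reordering step can be illustrated with a much cruder stand-in: treat modules as connected components of the thresholded adjacency matrix and reorder nodes module by module, so the matrix handed to the CNN is block-structured. A minimal sketch (seed BFS instead of the paper's module-identification method):

```python
from collections import deque

def reorder_by_modules(M, threshold=0.5):
    # Return a node ordering that groups the connected "modules" of the
    # thresholded graph, making the permuted adjacency block-structured
    n = len(M)
    seen, order = set(), []
    for seed in range(n):
        if seed in seen:
            continue
        seen.add(seed)
        q = deque([seed])
        while q:                      # BFS over the thresholded edges
            u = q.popleft()
            order.append(u)
            for v in range(n):
                if v not in seen and M[u][v] > threshold:
                    seen.add(v)
                    q.append(v)
    return order

def permute(M, order):
    return [[M[i][j] for j in order] for i in order]

# Two interleaved modules: {0, 2, 4} and {1, 3}
M = [[0, 0, 1, 0, 1],
     [0, 0, 0, 1, 0],
     [1, 0, 0, 0, 1],
     [0, 1, 0, 0, 0],
     [1, 0, 1, 0, 0]]
order = reorder_by_modules(M)
P = permute(M, order)
```

After reordering, the within-module edges sit in diagonal blocks and the cross-module entries are zero, which is the spatial locality a convolutional layer can exploit.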

Page 37: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Mike Barnes | Endotype discovery and response stratification in Immune-Inflammatory diseases

Page 38: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Mike Barnes | Endotype discovery and response stratification in Immune-Inflammatory diseases #1

Shared pathology: Rheumatoid Arthritis (RA), Psoriasis and Systemic Lupus Erythematosus (SLE)

IMIDS – A Treatment Continuum: Endotype match, immunogenicity, disease evolution, side effects (infections, off “target”)

Why IMID Endotypes matter (a) Random Patient Selection, (b) Targeted Clinical Trial

Joints of the hand offer a nice way to quantify disease progression and differentiate pathology types

Page 39: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Mike Barnes | Endotype discovery and response stratification in Immune-Inflammatory diseases #2

Hunting IMID (immune-mediated inflammatory disease) Endotypes: Response biomarkers, drug endotype, disease endotype and Multi-Omics

IMID Biologic targets – a highly connected system

Endotype identification is important: the same drug can be good for one endotype and bad for another.

Page 40: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Mike Barnes | Endotype discovery and response stratification in Immune-Inflammatory diseases #3

TranSMART/i2b2: i2b2 for Healthcare and Health Information Systems; tranSMART for clinical research.

http://transmartfoundation.org/ | https://github.com/transmart

PSORT TranSMART/i2b2: Data Infrastructure

Latent Class Mixed Models (LCMM): find groups or subtypes in multivariate categorical data. Essentially Latent Class Analysis (LCA) for longitudinal data.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4333702/
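Not a full LCMM, but the core idea — patients fall into latent classes with distinct longitudinal trajectories — can be sketched by fitting a slope per patient and clustering the slopes. A toy two-class example (hypothetical "responders" whose scores fall over visits vs "non-responders" whose scores drift up):

```python
def slope(ts, ys):
    # Ordinary least-squares slope of y on t
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    return (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
            / sum((t - mt) ** 2 for t in ts))

def two_means(xs, iters=20):
    # 1-D k-means with k = 2, seeded at the extremes
    c = [min(xs), max(xs)]
    for _ in range(iters):
        groups = ([], [])
        for x in xs:
            groups[0 if abs(x - c[0]) <= abs(x - c[1]) else 1].append(x)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return c, [0 if abs(x - c[0]) <= abs(x - c[1]) else 1 for x in xs]

ts = [0, 1, 2, 3]                                   # visits (e.g. months)
patients = [[10, 9, 8, 7], [9, 8, 7, 6], [11, 10, 9, 8],        # responders
            [5, 5.5, 6, 6.5], [4, 4.5, 5, 5.5], [6, 6.5, 7, 7.5]]  # non-responders
slopes = [slope(ts, ys) for ys in patients]
centers, labels = two_means(slopes)
```

A real LCMM additionally estimates class-specific mixed-effects trajectories and posterior class probabilities jointly, rather than this two-stage shortcut.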

Page 41: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Mike Barnes | Endotype discovery and response stratification in Immune-Inflammatory diseases #4

eMedLab is a hub: http://www.emedlab.ac.uk/

→ IMIDBio-UK

Images, genomics, electronic health records (EHR). The Francis Crick Institute, UCL, Sanger, Farr, Queen Mary, EMBL-EBI

The Selfish Scientist “A biologist would rather share their toothbrush than their (gene) names” - Mike Ashburner, Professor of Genetics, University of Cambridge, UK

from “The Seven Deadly Sins of Bioinformatics” by Carole Goble, The myGrid project, OMII-UK. https://www.slideshare.net/dullhunk/the-seven-deadly-sins-of-bioinformatics

http://dx.doi.org/10.1038/498255a

Data Sharing by Scientists: Practices and Perceptions. Carol Tenopir, Suzie Allard, Kimberly Douglass, Arsev Umur Aydinoglu, Lei Wu, Eleanor Read, Maribeth Manoff, Mike Frame. Published: June 29, 2011. https://doi.org/10.1371/journal.pone.0021101

'Omics Data Sharing. Field et al. (2009) | Science 09 Oct 2009: Vol. 326, Issue 5950, pp. 234-236. http://dx.doi.org/10.1126/science.1180598

Data sharing as social dilemma: Influence of the researcher’s personality. Linek et al. (2017) | PLOS ONE. https://doi.org/10.1371/journal.pone.0183216

Scholarly use of social media and altmetrics: A review of the literature. Sugimoto et al. (2017) | JASIST. http://doi.org/10.1002/asi.23833

Advantages of a Truly Open-Access Data-Sharing Model. Bertagnolli et al. (2017) | The New England Journal of Medicine. DOI: 10.1056/NEJMsb1702054

A Call for Open-Source Cost-Effectiveness Analysis. Joshua T. Cohen, PhD; Peter J. Neumann, ScD; John B. Wong, MD. Ann Intern Med. 2017 | DOI: 10.7326/M17-1153

Data sharing in clinical trials: An experience with two large cancer screening trials. Zhu et al. (2017) | PLOS Medicine | https://doi.org/10.1371/journal.pmed.1002304

Scholars in an increasingly open and digital world: imagined audiences and their impact on scholars’ online participation. Learning, Media and Technology (2017). http://dx.doi.org/10.1080/17439884.2017.1305966

Principle of proportionality in genomic data sharing. Wright et al. (2016) | Nature Reviews Genetics 17, 1–2 (2016). http://dx.doi.org/10.1038/nrg.2015.5

OpenfMRI: Open sharing of task fMRI data. Poldrack and Gorgolewski. NeuroImage Volume 144, Part B, January 2017, Pages 259–261. https://doi.org/10.1016/j.neuroimage.2015.05.073

Page 42: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Mike Barnes | Comments and further literature background: Drug Sensitivity prediction

Transfer Learning Approaches to Improve Drug Sensitivity Prediction in Multiple Myeloma Patients. Turki Turki; Zhi Wei; Jason T. L. Wang. IEEE Access (Volume 5). https://doi.org/10.1109/ACCESS.2017.2696523

Compass in the data ocean: Toward chronotherapy. Rikuhiro G. Yamada and Hiroki R. Ueda. PNAS May 16, 2017, vol. 114, no. 20. http://dx.doi.org/10.1073/pnas.1705326114

Several reports have shown that internal body time varies by 5–6 h in healthy humans and by as much as 10–12 h in shift workers. Accumulating evidence suggests that those misalignments may be a link to health risks, including obesity (Roenneberg et al. 2012) and psychiatric disorders (Wulff et al. 2010). Recently, a research group reported that a majority of mammalian genes are under clock regulation, and that markedly different genes show circadian oscillation in each tissue (Zhang et al. 2014). Importantly, they reported that a substantial number of top-selling drugs in the United States have circadian targets (Zhang et al. 2014). Based on those findings, a convenient and precise molecular measurement of tissue molecular time is needed. The report published in PNAS by Anafi et al. (2017) from the same research group strives to achieve this precise molecular measurement of tissue molecular time. Petteri: In other words, predicting when to administer the best personalized drug.

Machine learning identifies a compact gene set for monitoring the circadian clock in human blood. Jacob J. Hughey. Genome Medicine 2017, 9:19. https://doi.org/10.1186/s13073-017-0406-4

Here we used a recently developed method called ZeitZeiger to predict circadian time (CT, time of day according to the circadian clock) from genome-wide gene expression in human blood. Our results are an important step towards precision circadian medicine. In addition, our generalizable extensions to ZeitZeiger may be applicable to the growing number of biological datasets that contain multiple observations per individual.
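The underlying prediction problem is circular regression: circadian time wraps around at 24 h. A toy sketch of the idea (not ZeitZeiger itself): model each clock gene as a phase-shifted cosine of circadian time and recover CT by template matching over a time grid. Gene profiles and phases here are invented for illustration:

```python
import math

def expression(ct, phases):
    # Toy clock-gene model: cosine with a gene-specific peak phase (hours)
    return [math.cos(2 * math.pi * (ct - p) / 24.0) for p in phases]

def predict_ct(expr, phases, step=0.1):
    # Grid search for the circadian time whose template best matches expr
    best_t, best_err = 0.0, float('inf')
    for g in range(int(24.0 / step)):
        t = g * step
        err = sum((a - b) ** 2 for a, b in zip(expr, expression(t, phases)))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

phases = [0.0, 6.0]        # two genes peaking 6 h apart resolve CT uniquely
observed = expression(17.3, phases)
ct_hat = predict_ct(observed, phases)
```

A single sinusoidal gene would leave a two-fold ambiguity; the second, phase-shifted gene plays the role of the complementary markers in the compact gene set.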

Page 43: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Slava Akmaev | Artificial Intelligence in Biopharma Research and Development

Page 45: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Slava Akmaev | Comments and further literature background

They are now a ‘boring’ Bayesian company that had a lot of problems with traction back in the day, when the big players did not believe that AI/machine learning would have any real use in drug target discovery.

http://doi.org/10.1126/science.1105809 | Cited by 1205 articles

Featuring a talk by Marco Scutari, University of Oxford. https://www.slideshare.net/BayesNetsMeetupLondon/bayes-nets-meetup-sept-29th-2016-bayesian-network-modelling-by-marco-scutari

A network perspective on patient experiences and health status: the Medical Expenditure Panel Survey 2004 to 2011. Yi-Sheng Chao, Hau-tieng Wu, Marco Scutari, Tai-Shen Chen, Chao-Jung Wu, Madeleine Durand and Antoine Boivin. BMC Health Services Research 2017, 17:579. https://doi.org/10.1186/s12913-017-2496-5

https://www.meetup.com/London-Bayesian-network-Meetup-Group/events/233231685/

Granger causality vs. dynamic Bayesian network inference: a comparative study. Cunlu Zou and Jianfeng Feng. BMC Bioinformatics 2009, 10:122. https://doi.org/10.1186/1471-2105-10-122
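The contrast the paper studies can be reproduced in a few lines: x Granger-causes y if lagged x improves the prediction of y beyond y's own past. A minimal sketch (variance-ratio only, no formal F-test; simulated system where x drives y with one step of lag):

```python
import random

random.seed(2)

def ols1(xs, ys):
    # One-predictor least squares through the origin (series are zero-mean-ish)
    return sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

def ols2(X, ys):
    # Two-predictor least squares via 2x2 normal equations
    s11 = sum(a * a for a, _ in X)
    s12 = sum(a * b for a, b in X)
    s22 = sum(b * b for _, b in X)
    t1 = sum(a * y for (a, _), y in zip(X, ys))
    t2 = sum(b * y for (_, b), y in zip(X, ys))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

def granger_ratio(cause, effect):
    # RSS(restricted AR(1)) / RSS(full model with the lagged cause added);
    # ratios well above 1 indicate Granger causality
    lag_e, lag_c, now = effect[:-1], cause[:-1], effect[1:]
    a = ols1(lag_e, now)
    rss_r = sum((y - a * x) ** 2 for x, y in zip(lag_e, now))
    b1, b2 = ols2(list(zip(lag_e, lag_c)), now)
    rss_f = sum((y - b1 * p - b2 * q) ** 2
                for (p, q), y in zip(zip(lag_e, lag_c), now))
    return rss_r / rss_f

# x is AR(1); y follows its own past plus lagged x
x, y = [random.gauss(0, 1)], [0.0]
for _ in range(500):
    x.append(0.5 * x[-1] + random.gauss(0, 1))
    y.append(0.3 * y[-1] + 0.8 * x[-2] + random.gauss(0, 0.3))

ratio_fwd = granger_ratio(x, y)   # x -> y: large
ratio_rev = granger_ratio(y, x)   # y -> x: near 1
```

Dynamic Bayesian network inference addresses the same directed-dependency question but through explicit probabilistic structure learning rather than this regression contrast.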

Reverse-engineering biological networks from large data sets. Joseph L. Natale, David Hofmann, Damian G. Hernández, Ilya Nemenman (Submitted on 17 May 2017 (v1), last revised 25 May 2017 (this version, v2)). https://arxiv.org/abs/1705.06370

Page 46: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry | What can AI contribute to Neuroscience?

Page 47: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry | What can AI contribute to Neuroscience? #1

The problem is not the computational power, but figuring out what information is present, how it is encoded and what computations are performed.

And how to generate hypotheses and models.

Deep learning is inspired by the brain:
Perceptron (neuron)
Recurrent networks (e.g. LSTM, hippocampus)
Convolutional networks (cat visual system)
Deep Reinforcement Learning (behaviorism, dopaminergic system)

Deep Neural Networks:
Predict the response from stimuli
Predict the stimuli from the recorded response
Build generative models from these
Let the models build themselves

http://dx.doi.org/10.1038/nature14541

Page 48: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry
What can AI contribute to Neuroscience? #1

Can we decode place cells using RNNs (LSTMs)?
Multielectrode array (MEA) recordings from a rodent model
Tampuu A, Barry C, Vicente (in prep.)

Gradient Analysis

Surprisingly (or counterintuitively), the most informative cells were interneurons firing pretty much everywhere but with "defined" gradients, while the least informative cells fired rather randomly (high entropy).

None of the neuroscience was actually new or groundbreaking, as admitted by Dr. Barry, but it was nice to see that the data-driven method came to the same conclusions as the existing literature, for example related to the sides of the place field.

John O’Keefe of University College London won half of the Nobel prize for his discovery in 1971 of ‘place’ cells in the hippocampus, a part of the brain associated with memory.

http://dx.doi.org/10.1038/514153a

Page 49: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry
What can AI contribute to Neuroscience? #2

Human location decoding from fMRI?
Simple spatial memory task in a virtual reality environment

DNNs barely exceed SVM performance. There is too little data for the model capacity used to actually outperform the SVM. Future exploration: data augmentation and transfer learning approaches (from a "medical ImageNet"?)

DNNs make good models of the visual system, making it possible to decode brain responses to visual scenes.

Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Brain's Ventral Visual Pathway. Umut Güçlü, Marcel A. J. van Gerven (Submitted on 24 Nov 2014). https://arxiv.org/abs/1411.6422 | https://doi.org/10.1523/JNEUROSCI.5023-14.2015

https://doi.org/10.1016/j.neuroimage.2017.08.027

Medical Image Net - Radiology Informatics
http://langlotzlab.stanford.edu/projects/medical-image-net/

https://www.slideshare.net/PetteriTeikariPhD/medical-imagenet

Page 50: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry
What can AI contribute to Neuroscience? #2

Future Directions:
C. elegans with its nice 302-neuron system as a model organism for a "functional connectome"

Summary:
Biological neural networks vs artificial networks (spike trains, no backprop in the brain, no negative firing rates, excitatory and inhibitory neurons)

A Transparent Window into Biology: A Primer on Caenorhabditis elegans. AK Corsi et al. Cited by 80 articles.

Non-Associative Learning Representation in the Nervous System of the Nematode Caenorhabditis elegans. Ramin M. Hasani, Magdalena Fuchs, Victoria Beneder, Radu Grosu (Submitted on 18 Mar 2017 (v1), last revised 25 Mar 2017 (this version, v3)). https://arxiv.org/abs/1703.06264

SIM-CE: An Advanced Simulink Platform for Studying the Brain of Caenorhabditis elegans. Ramin M. Hasani, Victoria Beneder, Magdalena Fuchs, David Lung, Radu Grosu (Submitted on 18 Mar 2017 (v1), last revised 25 Mar 2017 (this version, v3)). https://arxiv.org/abs/1703.06270

https://www.slideshare.net/PetteriTeikariPhD/prediction-of-art-market

Toward an Integration of Deep Learning and Neuroscience. Hypothesis & Theory article, Front. Comput. Neurosci., 14 September 2016. http://dx.doi.org/10.3389/fncom.2016.00094

Cited by 42 articles

Page 51: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry Comments and further literature background: Generative models with the C. elegans model

Development of the C. elegans nervous system. (A) C. elegans reaches adulthood approximately 63 hours after fertilization, over which time its body increases appreciably in length. (B) In the adult hermaphrodite worm, neurons are distributed unevenly across the body, with more than 60% being located in the head and about 15% being located in the tail tip. Here, neurons are color-coded according to their membership to the following ganglia: anterior [A], dorsal [B], lateral [C], ventral [D], retrovesicular [E], ventral cord [G], posterior lateral [F], preanal [H], dorsorectal [J], and lumbar [K]. (C) The total number of neurons (N, solid black), and connections (K, dashed blue), grows nonlinearly but monotonically with time. (D) A phase transition is evident in the number of synapses as a function of the number of neurons (yellow circles): before hatching, K grows as N², whereas after hatching, K grows linearly with N (dashed green line). (Inset) Plot of the average nodal degree, K, versus number of nodes, N. - Nicosia et al. (2013)

Generative Models for Network Neuroscience: Prospects and Promise. Richard F. Betzel, Danielle S. Bassett (Submitted on 26 Aug 2017). https://arxiv.org/abs/1708.07958

Applications using generative models. Model parameters can be fit to individual subjects and those parameters compared to some behavioral measures (A) or used to classify different populations from one another (B). Generative models can also be used to simulate the development of a biological neural network. These simulations can be used as forecasting devices to identify individuals at risk of developing maladaptive network topologies. They can also be used to explore possible interventions, e.g. perturbations to parameters or wiring rules, that drive an individual away from an unfavorable, maladaptive network topology towards a more favorable state.

Page 52: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry Comments and further literature background: Backpropagation useful even in Deep Learning?

Artificial intelligence pioneer says we need to start over. Steve LeVine, Sep 15 2017. https://www.axios.com/ai-pioneer-advocates-starting-over-2485537027.html

Geoffrey Hinton harbors doubts about AI's current workhorse. (Johnny Guatto / University of Toronto)

Hinton, a professor emeritus at the University of Toronto and a Google researcher, said he is now "deeply suspicious" of back-propagation, the workhorse method that underlies most of the advances we are seeing in the AI field today, including the capacity to sort through photos and talk to Siri. "My view is throw it all away and start again," he said.

But Hinton suggested that, to get to where neural networks are able to become intelligent on their own, what is known as "unsupervised learning," "I suspect that means getting rid of back-propagation." "I don't think it's how the brain works," he said. "We clearly don't need all the labeled data."

An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity. James C. R. Whittington and Rafal Bogacz. http://dx.doi.org/10.1162/NECO_a_00949

Towards Biologically Plausible Deep Learning. Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, Thomas Mesnard, Zhouhan Lin (2016). https://arxiv.org/abs/1502.04156

The graphical brain: belief propagation and active inference. Karl J Friston, Thomas Parr and Bert de Vries (2017). http://dx.doi.org/10.1162/NETN_a_00018

Visual pathways from the perspective of cost functions and multi-task deep neural networks. H. Steven Scholte, Max M. Losch, Kandan Ramakrishnan, Edward H.F. de Haan, Sander M. Bohte (2017). https://arxiv.org/abs/1706.01757

Neuroscience-Inspired Artificial Intelligence. Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, Matthew Botvinick (2017). Neuron Volume 95, Issue 2, 19 July 2017, Pages 245-258. https://doi.org/10.1016/j.neuron.2017.06.011

Bidirectional Backpropagation: Towards Biologically Plausible Error Signal Transmission in Neural Networks. Hongyin Luo, Jie Fu, James Glass (2017). https://arxiv.org/abs/1702.07097

"Can the brain do back-propagation?" Geoffrey Hinton of Google & University of Toronto https://youtu.be/VIRCybGgHts

Seppo Linnainmaa, (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, 6-7.

“In 1970, Linnainmaa introduced the reverse mode of automatic differentiation (AD), to efficiently compute the derivative of a differentiable composite function that can be represented as a graph, by recursively applying the chain rule to the building blocks of the function. This method is now heavily used in numerous applications. For example, Backpropagation of errors in multi-layer perceptrons, a technique used in machine learning, is a special case of reverse mode AD.”
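One concrete proposal from the "biologically plausible backprop" literature cited above is feedback alignment: the backward pass uses a fixed random matrix instead of the transpose of the forward weights, so no weight symmetry is required. A minimal numpy sketch on a toy regression problem (network sizes, learning rate, and the linear target are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, H, O = 200, 5, 20, 2
X = rng.normal(size=(N, D))
Y = X @ rng.normal(size=(D, O))            # toy linear regression target

W1 = rng.normal(scale=0.5, size=(D, H))    # forward weights, layer 1
W2 = rng.normal(scale=0.5, size=(H, O))    # forward weights, layer 2
B = rng.normal(scale=0.5, size=(O, H))     # FIXED random feedback weights (replaces W2.T)

lr, losses = 0.1, []
for _ in range(300):
    h = np.tanh(X @ W1)
    e = h @ W2 - Y                         # output error
    losses.append(np.mean(e ** 2))
    # Feedback alignment: propagate the error through B, not through W2.T.
    delta_h = (e @ B) * (1 - h ** 2)
    W2 -= lr * h.T @ e / N
    W1 -= lr * X.T @ delta_h / N
```

The forward weights gradually align with the random feedback matrix, so the loss still falls even though the exact gradient is never computed.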

Page 53: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry Comments and further literature background: Deep learning & Neuroscience #1A:

By Petteri Teikari
https://www.slideshare.net/PetteriTeikariPhD/prediction-of-art-market

Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision. Haiguang Wen, Junxing Shi, Yizhen Zhang, Kun-Han Lu, Zhongming Liu (Submitted on 11 Aug 2016). https://arxiv.org/abs/1608.03425

Sharing deep generative representation for perceived image reconstruction from human brain activity. Changde Du, Changying Du, Huiguang He (Submitted on 25 Apr 2017 (v1), last revised 11 Jul 2017 (this version, v3)). https://arxiv.org/abs/1704.07575

A primer on encoding models in sensory neuroscience. Marcel A.J. van Gerven. Journal of Mathematical Psychology Volume 76, Part B, February 2017, Pages 172-183. https://doi.org/10.1016/j.jmp.2016.06.009

Seeing it all: Convolutional network layers map the function of the human visual system. Michael Eickenberg, Alexandre Gramfort, Gaël Varoquaux, Bertrand Thirion. NeuroImage Volume 152, 15 May 2017, Pages 184-194. https://doi.org/10.1016/j.neuroimage.2016.10.001

Page 54: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry Comments and further literature background: Deep learning & Neuroscience #1B:

Generative Models for Network Neuroscience: Prospects and Promise. Richard F. Betzel, Danielle S. Bassett (Submitted on 26 Aug 2017). https://arxiv.org/abs/1708.07958

While illuminating, the process of describing networks in terms of their topological properties amounts to an exercise in “fact collecting.” Though summary statistics might be useful for comparing individuals and as biomarkers of disease, they offer limited insight into the mechanisms by which a network functions, grows, and evolves. Arguably, one of the overarching goals of neuroscience (and biology, in general) is to manipulate or perturb networks in targeted and deliberate ways that result in repeatable and predictable outcomes. For network neuroscience to take steps in addressing this goal, it must shift its current emphasis beyond network taxonomy – i.e. studying subtle individual- or population-level differences in summary statistics – towards a science of mechanisms and process [22, 23].

Space of generative models. Generative models can be differentiated from one another along many dimensions, one of which is the timescale over which they operate. A model’s timescale is related to its neurobiological plausibility. Models whose timescale is nearer that of developmental time can incorporate more realistic and interpretable features and, in turn, have the chance of uncovering realistic growth mechanisms (e.g. the model of C. elegans). At the opposite end of the spectrum are “single shot” models, e.g. stochastic blockmodels, in which connection probabilities are initialized early on and all connections and weights are generated in a single algorithmic step. Situated between these extremes are growth models that exhibit intrinsic timescales over which connections and/or nodes are added to the network, but where the timescale has no clear biological interpretation

The requisite ingredients
An open and important question that scientists face when embarking on a study to develop a generative model is: "What features are required to build good network models?" Perhaps the simplest feature one requires is a target network topology, the organization of the network that one is trying to recapitulate and ultimately explain. Yet, a single network topology can be built in many different ways, with strikingly different underlying mechanisms [52]. Thus one might also wish to have a deep understanding of (i) the constraints on anatomy, from physical distance [53] to energy consumption [54], (ii) the rules of neurobiological growth, from chemical gradients [55] to genetic specification [56], and (iii) the pressures of normal or abnormal development, and their relevance for functionality. Moreover, each of these constraints, rules, and pressures can change as the system grows, highlighting the importance of developmental timing [56]. Of course, one might also wish to choose which of these details to include in the model, with model parsimony being one of the key arguments in support of building models with fewer details.

FUTURE DIRECTIONS
A more novel possibility is to use the model for disease simulation. Many psychiatric [150] and neurodegenerative diseases [151] are manifest at the network level in the form of miswired or dysconnected systems, but it is unclear what predisposes an individual to evolve into a disease state. Similarly, the generative model can be used to explore in silico the effect of potential intervention strategies. We can think of biological neural networks as living in a high-dimensional space based on their topological characteristics.

A major hindrance in realizing these goals, however, is the absence of data tailored for generative models. The ideal data would (i) be longitudinal, enabling one to track and incorporate individual-level changes over time in the model, and (ii) include multiple data modalities, such as functional and structural connectivity, and genetics, along with other select factors that could influence network level organization

Page 55: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry Comments and further literature background: Deep learning & Neuroscience #1C:

Deep adversarial neural decoding. Yağmur Güçlütürk, Umut Güçlü, Katja Seeliger, Sander Bosch, Rob van Lier. Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands (Submitted on 19 May 2017 (v1), last revised 15 Jun 2017 (this version, v3)). https://arxiv.org/abs/1705.07109

Here, we present a new approach by combining probabilistic inference with deep learning, which we refer to as deep adversarial neural decoding (DAND). Our approach first inverts the linear transformation from latent features to observed responses with maximum a posteriori estimation. Next, it inverts the nonlinear transformation from perceived stimuli to latent features with adversarial training and convolutional neural networks. An illustration of our model is provided in Figure 1. We show that our approach achieves state-of-the-art reconstructions of perceived faces from the human brain.

We tested our approach by reconstructing face stimuli from BOLD responses at an unprecedented level of accuracy and detail, matching the target stimuli in several key aspects such as gender, skin color and facial features as well as identifying perceptual factors contributing to the reconstruction accuracy. Deep decoding approaches such as the one developed here are expected to play an important role in the development of new neuroprosthetic devices that operate by reading subjective information from the human brain.

Deep neural networks have been used for classifying or identifying stimuli via the use of a deep encoding model [Güçlü and M. van Gerven 2015, 2017] or by predicting intermediate stimulus features [Horikawa and Kamitani 2017, 2017b]. Deep belief networks and convolutional neural networks have been used to reconstruct basic stimuli (handwritten characters and geometric figures) from patterns of brain activity [van Gerven et al. 2010, Du et al. 2017]. To date, going beyond such mostly retinotopy-driven reconstructions and reconstructing complex naturalistic stimuli with high accuracy have proven to be difficult.
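The first DAND stage, inverting the linear latent-to-response transformation by maximum a posteriori estimation, has a closed form when the noise and the latent prior are both Gaussian: for y = Bz + ε with ε ~ N(0, σ²I) and z ~ N(0, I), z_MAP = (BᵀB/σ² + I)⁻¹Bᵀy/σ². A numpy sketch with simulated data (the dimensions, noise level, and random B are made up; in the paper B is learned from fMRI data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_latent, sigma = 100, 10, 0.1
B = rng.normal(size=(n_voxels, n_latent))     # latent-feature -> response mapping
z_true = rng.normal(size=n_latent)            # latent features of the perceived stimulus
y = B @ z_true + rng.normal(scale=sigma, size=n_voxels)  # simulated brain response

# MAP estimate under z ~ N(0, I) and y | z ~ N(Bz, sigma^2 I):
#   z_map = (B^T B / sigma^2 + I)^{-1} B^T y / sigma^2
A = B.T @ B / sigma**2 + np.eye(n_latent)
z_map = np.linalg.solve(A, B.T @ y / sigma**2)
recovery_error = np.linalg.norm(z_map - z_true) / np.linalg.norm(z_true)
```

The second DAND stage (adversarially trained inversion from latents back to images) has no such closed form; this is exactly why the paper pairs MAP inference with a learned generator.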

Page 56: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry Comments and further literature background: Deep learning & Neuroscience #1D:

Deep learning with convolutional neural networks for EEG decoding and visualization. Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, Tonio Ball. Translational Neurotechnology Lab, Epilepsy Center, Medical Center – University of Freiburg, Freiburg, Germany | BrainLinks-BrainTools Cluster of Excellence, University of Freiburg, Freiburg, Germany

Human Brain Mapping (2017)http://dx.doi.org/10.1002/hbm.23730

Here we present two novel methods for feature visualization that we used to gain insights into our ConvNet learned from the neuronal data. The motivation for developing our visualization methods was threefold:

Verify that the ConvNets are using actual brain signals

Gain insights into the ConvNet behavior, e.g., what EEG features the ConvNet uses to decode the signal

Potentially make steps toward using ConvNets for brain mapping.

The EEG signal has characteristics that make it different from inputs that ConvNets have been most successful on, namely images. In contrast to two-dimensional static images, the EEG signal is a dynamic time series from electrode measurements obtained on the three-dimensional scalp surface. Also, the EEG signal has a comparatively low signal-to-noise ratio, that is, sources that have no task-relevant information often affect the EEG signal more strongly than the task-relevant sources. These properties could make learning features in an end-to-end fashion fundamentally more difficult for EEG signals than for images. Thus, the existing ConvNets architectures from the field of computer vision need to be adapted for EEG input and the resulting decoding accuracies rigorously evaluated against more traditional feature extraction methods. For that purpose, a well-defined baseline is crucial, that is, a comparison against an implementation of a standard EEG decoding method validated on published results for that method. In light of this, in this study, we addressed two key questions:

What is the impact of ConvNet design choices (e.g., the overall network architecture or other design choices such as the type of nonlinearity used) on the decoding accuracies?

What is the impact of ConvNet training strategies (e.g., training on entire trials or crops within trials) on the decoding accuracies?

To address these questions, we created three ConvNets with different architectures, with the number of convolutional layers ranging from 2 layers in a “shallow” ConvNet over a 5-layer deep ConvNet up to a 31-layer residual network (ResNet). All architectures were adapted to the specific requirements imposed by the analysis of multi-channel EEG data
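The paper's "cropped training" strategy, training on many overlapping windows within each trial rather than on whole trials, is easy to sketch: each crop inherits its trial's label, multiplying the number of training examples. A minimal version with invented shapes (the paper's exact crop length and stride differ):

```python
import numpy as np

def make_crops(trials, labels, crop_len, stride):
    """Slice (n_trials, n_channels, T) into overlapping crops sharing the trial label."""
    n_trials, n_channels, T = trials.shape
    starts = range(0, T - crop_len + 1, stride)
    crops = np.stack([trials[i, :, s:s + crop_len]
                      for i in range(n_trials) for s in starts])
    crop_labels = np.repeat(labels, len(list(starts)))
    return crops, crop_labels

trials = np.random.default_rng(0).normal(size=(8, 44, 1000))  # 8 trials, 44 EEG channels
labels = np.arange(8) % 4                                     # 4 motor-imagery classes
crops, crop_labels = make_crops(trials, labels, crop_len=500, stride=125)
```

Here 8 trials become 40 half-length crops; at test time the per-crop predictions of one trial are averaged back into a single trial prediction.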

Computation overview for input-perturbation network-prediction correlation map.

Absolute input-perturbation network-prediction correlation frequency profile for the deep ConvNet.

Input-perturbation network-prediction correlation maps for the deep ConvNet. Correlation of class predictions and amplitude changes. Averaged over all subjects of the High-Gamma Dataset.

Page 57: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry Comments and further literature background: Deep learning & Neuroscience #2: Spiking and Binary/Ternary Networks

SuperSpike: Supervised learning in multi-layer spiking neural networks. Friedemann Zenke, Surya Ganguli. Department of Applied Physics, Stanford University (Submitted on 31 May 2017). https://arxiv.org/abs/1705.11146

BinaryConnect: Training Deep Neural Networks with binary weights during propagations. Matthieu Courbariaux, Yoshua Bengio, Jean-Pierre David. Advances in Neural Information Processing Systems 28 (NIPS 2015). http://papers.nips.cc/paper/5647-binaryconnect-training-deep-neural-networks-with-binary-weights-during-propagations

https://github.com/MatthieuCourbariaux/BinaryConnect

Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines. Emre O. Neftci, Charles Augustine, Somnath Paul and Georgios Detorakis. Front Neurosci. 2017; 11: 324. doi: 10.3389/fnins.2017.00324

Ternary Weight Networks. Fengfu Li, Bo Zhang, Bin Liu (Submitted on 16 May 2016 (v1), last revised 19 Nov 2016 (this version, v2)). https://arxiv.org/abs/1605.04711

Ternary Residual Networks. Abhisek Kundu, Kunal Banerjee, Naveen Mellempudi, Dheevatsa Mudigere, Dipankar Das, Bharat Kaul, Pradeep Dubey (Submitted on 15 Jul 2017). Parallel Computing Lab, Intel Labs. https://arxiv.org/abs/1707.04679

Temporally Efficient Deep Learning with Spikes. Peter O'Connor, Efstratios Gavves, Max Welling (Submitted on 13 Jun 2017). https://arxiv.org/abs/1706.04159

“Intriguingly, this simple communication rule give rise to units that resemble biologically-inspired leaky integrate-and-fire neurons, and to a weight-update rule that is equivalent to a form of Spike-Timing Dependent Plasticity (STDP), a synaptic learning rule observed in the brain.”
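The leaky integrate-and-fire unit mentioned in the quote takes only a few lines: the membrane potential decays towards rest, integrates the input current, and emits a spike followed by a reset when it crosses threshold. A pure-Python sketch with arbitrary constants (not from any of the papers above):

```python
def simulate_lif(current, dt=1e-3, tau=0.02, v_rest=-0.07,
                 v_thresh=-0.05, v_reset=-0.07, resistance=1e6):
    """Leaky integrate-and-fire: return spike times (in steps) for an input current trace."""
    v, spikes = v_rest, []
    for t, i_in in enumerate(current):
        # Leak towards rest plus integration of the input current.
        v += dt / tau * (-(v - v_rest) + resistance * i_in)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

strong = simulate_lif([5e-8] * 1000)   # constant 50 nA for 1 s of 1 ms steps: regular spiking
silent = simulate_lif([0.0] * 1000)    # no input: no spikes
```

The hard threshold is what makes the unit non-differentiable; SuperSpike's surrogate gradients exist precisely to train through this discontinuity.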

Page 58: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry Comments: Brain decoding and mapping in practice: Non-invasive brain “reading”

As if Facebook wasn’t already pervasive enough in everyday life, the company’s newly formed Building 8 “moon shot” factory is working on a device they say would let people type out words via a brain–computer interface (BCI). Marc Chevillet and his team want to build a modified version of the functional near-infrared spectroscopy (fNIRS) systems used today for neuroimaging. Whereas conventional fNIRS systems work by bouncing light off a tissue sample and analyzing all of the returning photons no matter how diffuse, Building 8’s prosthetic would detect only those photons that have scattered a small number of times—so-called quasi-ballistic photons—in order to provide the necessary spatial resolution.

https://www.scientificamerican.com/article/facebook-launches-moon-shot-effort-to-decode-speech-direct-from-the-brain/

Elon Musk wants to merge the computer with the human brain, build a "neural lace," create a "direct cortical interface" (company called Neuralink). Bryan Johnson, a Silicon Valley entrepreneur who previously sold a startup to PayPal for $800 million, is now building a company called Kernel. He says the company aims to build a new breed of "neural tools" in hardware and software—ultimately, in a techno-utopian way, allowing the brain to do things it has never done before. In other words, Musk and Johnson are applying the Silicon Valley playbook to neuroscience. They're talking about a technology they want to build well before they can actually build it.

Researchers could also develop genetic techniques to modify neurons so that machines can "read and write" to them from outside our bodies. Or they could develop nano-robots that we ingest into our bodies for the same purpose. All this, David Eagleman says, is more plausible than an implanted neural lace.

If you strip away all the grandiose language around these efforts from Johnson and Musk, however, Eagleman admires what they are doing, mainly because they are pumping money into research. "Because they are wealthy, they can set their sights on a big problem we're trying to solve, and they can work their way toward their problem," he says.

https://www.wired.com/2017/03/elon-musks-neural-lace-really-look-like/

g.BCIsys - g.tec's Brain-Computer Interface research environment. Complete BCI research system for EEG and ECoG. http://www.gtec.at/Products/Complete-Solutions/g.BCIsys-Specs-Features

Brain monitoring takes a leap out of the lab. First-of-its-kind dry EEG system can be used for real-life applications. http://ucsdnews.ucsd.edu/pressrelease/brain_monitoring_takes_a_leap_out_of_the_lab

Bioengineers and cognitive scientists have developed the first portable, 64-channel wearable brain activity monitoring system that’s comparable to state-of-the-art equipment found in research laboratories

Mullen et al. (2015): Real-Time Neuroimaging and Cognitive Monitoring Using Wearable Dry EEG.

NIRx
With NIRSport, you can measure fNIRS from anywhere on the head, in any environment, concurrently with (nearly) any other modality.

http://nirx.net/nirsport/

Page 59: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Caswell Barry Comments: Brain decoding and mapping in practice: Non-invasive brain “writing”

Focused ultrasonic neuromodulation. William ‘Jamie’ Tyler lab. https://www.tylerlab.com/ultrasonic-neuromodulation/

https://www.theguardian.com/science/2016/nov/07/us-military-successfully-tests-electrical-brain-stimulation-to-enhance-staff-skills

Writing in the journal Frontiers in Human Neuroscience, they say that the technology, known as transcranial direct current stimulation (tDCS, or tACS), has a “profound effect”.

Medical device developed by Nexstim achieves very promising results for stroke patient rehabilitation.

‘With Nexstim's devices and their use of three-dimensional structural images of the brain, it is possible to focus the stimulation accurately (via transcranial magnetic stimulation, TMS), on the order of millimetres, and thanks to the EEG recording, we immediately receive information about changes in the brain's electrical activity,’ says Professor Risto Ilmoniemi.

http://ani.aalto.fi/en/current/news/2014-10-13-003/

http://asci.aalto.fi/en/science_factories/factory_report-coupling_to_the_dynamics_of_the_human_brain_with_tms-eeg/

Ilmoniemi, Risto J., Juha Virtanen, Jarmo Ruohonen, Jari Karhu, Hannu J. Aronen, and Toivo Katila. "Neuronal responses to magnetic stimulation reveal cortical reactivity and connectivity." Neuroreport 8, no. 16 (1997): 3537-3540.https://www.ncbi.nlm.nih.gov/pubmed/9427322Cited by 570 Articles

→ Methodology for combined TMS and EEG
https://www.technologyreview.com/s/542176/a-shocking-way-to-fix-the-brain/

deep brain stimulation (DBS)

Page 60: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Suchi Saria
Can Machines Spot Diseases Faster than Expert Humans?

Page 61: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Suchi Saria
Can Machines Spot Diseases Faster than Expert Humans?

TREWS: an intelligent pre-emptive system for sepsis detection at Johns Hopkins University.

Challenges with different sampling rates (e.g. infrequent creatinine levels vs. continuous heart rate / HRV monitoring).

Personalized medicine for predicting the individualized response (or identification of phenotypes at a more granular level, beyond diagnosis codes).

Petteri: With big data and data mining, what will happen to diagnosis codes? Is a person with diabetes + glaucoma, for example, just the sum of the two, or something novel with a different response to treatment?

Page 63: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Suchi Saria
Comments and further literature background: Irregular and missing samples #1

Feature engineering remains a major bottleneck when creating predictive systems from electronic medical records. At present, an important missing element is detecting predictive regular clinical motifs from irregular episodic records. We present Deepr (short for Deep record), a new end-to-end deep learning system that learns to extract features from medical records and predicts future risk automatically. Deepr transforms a record into a sequence of discrete elements separated by coded time gaps and hospital transfers. On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk. Deepr permits transparent inspection and visualization of its inner working. We validate Deepr on hospital data to predict unplanned readmission after discharge. Deepr achieves superior accuracy compared to traditional techniques, detects meaningful clinical motifs, and uncovers the underlying structure of the disease and intervention space. - http://arxiv.org/abs/1607.07519
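Deepr's core transformation, flattening a visit history into a token sequence with discretized time-gap tokens between visits, can be sketched directly (the gap bins and token names here are illustrative, not the paper's exact vocabulary):

```python
def record_to_sequence(visits):
    """visits: list of (days_since_previous_visit, [codes]) -> flat token sequence.

    Time gaps are discretized into coarse tokens, as in Deepr, so that a
    convolutional net over the embedded sequence can pick up temporal motifs.
    """
    def gap_token(days):
        if days <= 30:
            return "GAP_0-1m"
        if days <= 90:
            return "GAP_1-3m"
        return "GAP_3m+"

    tokens = []
    for i, (gap_days, codes) in enumerate(visits):
        if i > 0:                       # no gap token before the first visit
            tokens.append(gap_token(gap_days))
        tokens.extend(codes)
    return tokens

# Hypothetical record: three visits with ICD-style codes and inter-visit gaps.
seq = record_to_sequence([(0, ["E11", "I10"]), (45, ["N18.3"]), (200, ["I10", "Z49"])])
# seq == ['E11', 'I10', 'GAP_1-3m', 'N18.3', 'GAP_3m+', 'I10', 'Z49']
```

The resulting token sequence is then embedded and fed to the convolutional layer that detects the "predictive local clinical motifs" the abstract describes.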

TREATING MISSING DATA: various options

1. ZERO-IMPUTATION Set missing values to zero

2. FORWARD-FILLING Use the previous observed value

3. MISSINGNESS Treat the missing value itself as a signal, as the lack of a measurement, e.g. in an ICU, can carry information (Lipton et al. 2016)

4. BAYESIAN STATE-SPACE MODELING to fill in the missing data (Luttinen et al. 2016, BayesPy package)

5. GENERATIVE MODELING Train a deep network to generate the missing samples (Im et al. 2016, RNN GAN; see also github:sequence_gan)
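Options 1-3 above are one-liners on a value series with None marking missing samples; the missingness indicator of Lipton et al. is simply a parallel binary series fed to the model alongside the imputed values. A stdlib-only sketch (the creatinine values are invented):

```python
def zero_impute(series):
    """Option 1: replace every missing sample with zero."""
    return [0.0 if v is None else v for v in series]

def forward_fill(series, initial=0.0):
    """Option 2: carry the last observed value forward."""
    out, last = [], initial
    for v in series:
        last = last if v is None else v
        out.append(last)
    return out

def missingness_mask(series):
    """Option 3: 1 where a value was measured, 0 where it was missing;
    fed to the model alongside the imputed values (Lipton et al. 2016)."""
    return [0 if v is None else 1 for v in series]

creatinine = [1.1, None, None, 1.4, None, 1.3]   # sparse lab values, None = not measured
zero_impute(creatinine)       # [1.1, 0.0, 0.0, 1.4, 0.0, 1.3]
forward_fill(creatinine)      # [1.1, 1.1, 1.1, 1.4, 1.4, 1.3]
missingness_mask(creatinine)  # [1, 0, 0, 1, 0, 1]
```

Options 4-5 replace these heuristics with a learned model of the series itself, at the cost of considerably more machinery.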

Page 64: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Suchi Saria
Comments and further literature background: Irregular and missing samples #2

http://arxiv.org/abs/1511.02554

Po-Hsiang Chiu, George Hripcsak. Department of Biomedical Informatics, Columbia University, 622 W. 168th Street, New York, NY, USA. https://doi.org/10.1016/j.jbi.2017.04.009

Learning statistical models of phenotypes using noisy labeled training data. Vibhu Agarwal, Tanya Podchiyska, Juan M Banda, Veena Goel, Tiffany I Leung, Evan P Minty, Timothy E Sweeney, Elsie Gyang, Nigam H Shah. J Am Med Inform Assoc (2016) 23 (6): 1166-1173. https://doi.org/10.1093/jamia/ocw028

A Deep Learning And Novelty Detection Framework For Rapid Phenotyping In High-Content Screening. C Sommer, R Hoefler, M Samwer, DW Gerlich. bioRxiv, 2017. https://doi.org/10.1101/134627

“Supervised machine learning is a powerful and widely used method to analyze high-content screening data. Despite its accuracy, efficiency, and versatility, supervised machine learning has drawbacks, most notably its dependence on a priori knowledge of expected phenotypes and time-consuming classifier training. We provide a solution to these limitations with CellCognition Explorer, a generic novelty detection and deep learning framework. Application to several large-scale image data sets demonstrates that CellCognition Explorer enables discovery of rare phenotypes without user training, thus facilitating assay development for high-content screening.”

Data analysis workflows with CellCognition Explorer.

Self-learning of cell object features with CellCognition Deep Learning Module. (a) Schematic illustration of deep learning using an autoencoder with convolutional, pooling, and fully connected layers. (b) Phenotype scoring of 2,428 siRNAs (see Fig. 1a) by novelty detection and deep learning using CellCognition Explorer. Red bars indicate the distribution of the top-100-ranked siRNA hits identified by conventional supervised learning as in (Held et al., 2010). (c) Comparison of the top-100 screening hits determined either by novelty detection and deep learning of object features (blue) or supervised learning and conventional features (yellow) for 2,428 siRNAs as in (a, b). Scale bars, 10 μm.

Page 65: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Suchi Saria
Comments and further literature background: Reinforcement learning for healthcare

Continuous State-Space Models for Optimal Sepsis Treatment - a Deep Reinforcement Learning Approach. Aniruddh Raghu, Matthieu Komorowski, Leo Anthony Celi, Peter Szolovits, Marzyeh Ghassemi. Computer Science and Artificial Intelligence Lab, MIT, Cambridge, MA (Submitted on 23 May 2017). https://arxiv.org/abs/1705.08422

In this work, we propose a new approach to deduce optimal treatment policies for septic patients by using continuous state-space models and deep reinforcement learning [Deep-Q Learning (Mnih et al., 2015)]. Learning treatment policies over continuous spaces is important, because we retain more of the patient's physiological information. Our model is able to learn clinically interpretable treatment policies, similar in important aspects to the treatment policies of physicians. Evaluating our algorithm on past ICU patient data, we find that our model could reduce patient mortality in the hospital by up to 3.6% over observed clinical policies, from a baseline mortality of 13.7%. The learned treatment policies could be used to aid intensive care clinicians in medical decision making and improve the likelihood of patient survival.

We prefer RL for sepsis treatment over supervised learning, because the ground truth of a “good” treatment strategy is unclear in the medical literature (Marik, 2015). Importantly, RL algorithms also allow us to infer optimal strategies from training examples that do not represent optimal behavior. RL is well-suited to identifying ideal septic treatment strategies, because clinicians deal with a sparse, time-delayed reward signal in septic patients, and optimal treatment strategies may differ.
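The Deep-Q approach above can be illustrated with a much smaller sketch: tabular Q-learning on a hypothetical "severity chain". Everything here (states, actions, rewards) is invented for illustration; the actual paper learns over continuous physiological states with a neural-network Q-function.

```python
import numpy as np

def train_q_learning(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy 'sepsis' chain: severity 0..3,
    action 1 (treat) lowers severity, action 0 (wait) raises it.
    Reaching severity -1 = recovery (+1 reward), severity 4 = death (-1)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((4, 2))                      # Q[state, action]
    for _ in range(episodes):
        s = int(rng.integers(0, 4))           # random initial severity
        while True:
            # epsilon-greedy action selection
            a = int(rng.integers(0, 2)) if rng.random() < eps else int(Q[s].argmax())
            s_next = s - 1 if a == 1 else s + 1
            if s_next < 0:                    # recovered: terminal reward +1
                Q[s, a] += alpha * (1.0 - Q[s, a])
                break
            if s_next > 3:                    # died: terminal penalty -1
                Q[s, a] += alpha * (-1.0 - Q[s, a])
                break
            # non-terminal step, reward 0
            Q[s, a] += alpha * (gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q

Q = train_q_learning()
policy = Q.argmax(axis=1)   # greedy 'treatment policy' per severity level
```

In this toy chain the learned greedy policy always treats, since waiting drifts toward the death state; the real problem differs precisely because treatment effects are stochastic and state is continuous.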

Additional Medical Reinforcement Learning literature

Shortreed et al. Informing sequential clinical decision-making through reinforcement learning: an empirical study. Machine learning, 84(1-2):109–136, 2011. doi: 10.1007/s10994-010-5229-0 | Cited by 58

Nemati et al. Optimal medication dosing from suboptimal clinical examples: A deep reinforcement learning approach. In 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), August 2016. doi: 10.1109/EMBC.2016.7591355 | Cited by 5

Komorowski et al. A Markov Decision Process to suggest optimal treatment of severe infections in intensive care. Poster in Neural Information Processing Systems Workshop on Machine Learning for Health, December 2016. http://www.nipsml4hc.ws/posters

Hochberg et al. A Reinforcement Learning System to Encourage Physical Activity in Diabetes Patients (2016) arXiv:1605.04070 [cs.CY] https://arxiv.org/abs/1605.04070

Akbari et al. A Holonic Multi-Agent System Approach to Differential Diagnosis. MATES 2017: Multiagent System Technologies pp 272-290. doi: 10.1007/978-3-319-64798-2_17

Prasad et al. A reinforcement learning approach to weaning of mechanical ventilation in intensive care units. (2017) arXiv:1704.06300 [cs.AI] https://arxiv.org/abs/1704.06300

Ling et al. Diagnostic Inferencing via Improving Clinical Concept Extraction with Deep Reinforcement Learning: A Preliminary Study. Proceedings of Machine Learning for Healthcare 2017. mucmd.org

OpenAI & Deepmind. Learning from Human Preferences. June 13, 2017. https://blog.openai.com/deep-reinforcement-learning-from-human-preferences

OpenAI & Deepmind. Learning to Model Other Minds. September 14, 2017. https://blog.openai.com/learning-to-model-other-minds/

Page 67: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

John Fox
Data science meets knowledge engineering: arguments for a hybrid approach #1

Francis Timothy (1937-1995) Pioneer of data science in the NHS

From low-level (e.g. deep learning for detecting lion in the image) to higher-level (so I see a lion, how should I react) semantic understanding of data

Knowledge-Data-Knowledge lifecycle. In other words, feeding the “actionable insights” back into the existing knowledge, improving future “actionable insights”, rather than just creating huge “shallow data lakes”: go for the “deep data”.

Repertoire of http://www.openclinical.org/. If, for example, Moorfields creates a good glaucoma care pathway, a hospital in the USA or in Zimbabwe could adapt it to their own context with less “from scratch” work.

Statistical analysis of the data in http://www.openclinical.org/ is not very strong yet, but future incorporation of more intelligent systems for the knowledge graph is possible within the infrastructure.

Page 68: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

John Fox
Data science meets knowledge engineering: arguments for a hybrid approach #2

Summary
Knowledge engineering
Knowledge representation
Data science
Analytics (and machine learning)
Hybrid statistical and symbolic learning

Finding nodes in a graph
Peaks and trends in multivariable distributions suggest the existence of nodes in the knowledge graph

Page 69: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

John Fox Comments and further literature background: Knowledge graph and high-level inference needed

Learning a Health Knowledge Graph from Electronic Medical Records
Maya Rotmensch, Yoni Halpern, Abdulhakim Tlimat, Steven Horng & David Sontag. Scientific Reports 7, Article number: 5994 (2017) doi: 10.1038/s41598-017-05778-z

Historically, the models used by diagnostic reasoning systems were specified manually, requiring tremendous amounts of expert time and effort. For example, it was estimated that about fifteen person-years were spent building the Internist-1/QMR knowledge base for internal medicine. However, the manual specification made these models extremely brittle and difficult to adapt to new diseases or clinical settings.

Automatic compilation of a graph relating diseases to the symptoms that they cause has the potential to significantly speed up the development of such diagnosis tools. Moreover, such graphs would provide value in and of themselves. For example, given that general-purpose web-search engines are among the most commonly consulted sources for medical information [White and Horvitz 2009; Hider et al. 2009], health panels such as those provided by Google using their health knowledge graph have a tremendous potential for impact [Ramaswami 2015].

EMR data is difficult to interpret for four main reasons: First, the text of physician and nursing notes is less formal than that of traditional textbooks, making it difficult to consistently identify disease and symptom mentions. Second, textbooks and journals often present simplified cases that relay only the most typical symptoms, to promote learning. EMR data presents real patients with all of the comorbidities, confounding factors, and nuances that make them individuals. Third, unlike textbooks that state the relationships between diseases and symptoms in a declarative manner, the associations between diseases and symptoms in the EMR are statistical, making it easy to confuse correlation with causation. Finally, the manner in which observations are recorded in the EMR is filtered through the decision-making process of the treating physician. Information deemed irrelevant may be omitted or not pursued, leading to information missing not at random [Weiskopf et al. 2013].

Concept extraction pipeline. Non-negated concepts and ICD-9 diagnosis codes are extracted from Emergency Department electronic medical records. Concepts, codes and concept aliases are mapped to unique IDs, which in turn populate a co-occurrence matrix of size (Concepts) × (Patients).

Workflow of modeling the relationship between diseases and symptoms and knowledge graph construction, for each of our 3 models (naive Bayes, logistic regression and noisy OR).
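Of the three models, the simplest flavor is easy to sketch: build concept co-occurrence counts from a patient × concept matrix and score candidate disease→symptom edges with a naive-Bayes-style lift. The toy "EMR" below and the 1.5 cutoff are fabricated purely for illustration; the paper's actual models (naive Bayes, logistic regression, noisy OR) are richer.

```python
import numpy as np

# Toy EMR: rows = patients, columns = extracted concepts (1 = mentioned).
concepts = ["pneumonia", "fever", "cough", "fracture"]
X = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
])

cooc = X.T @ X          # (concepts × concepts) co-occurrence counts
n = X.shape[0]

def edge_score(disease, symptom):
    """Naive-Bayes-style lift: P(symptom | disease) / P(symptom)."""
    d, s = concepts.index(disease), concepts.index(symptom)
    p_s_given_d = cooc[d, s] / cooc[d, d]
    p_s = cooc[s, s] / n
    return p_s_given_d / p_s

# Keep edges whose lift exceeds an (arbitrary, illustrative) cutoff of 1.5:
edges = [(d, s) for d in ["pneumonia", "fracture"] for s in ["fever", "cough"]
         if edge_score(d, s) > 1.5]
```

On this toy data only the pneumonia→fever and pneumonia→cough edges survive, which is exactly the "statistical, not declarative" association the quoted passage warns can confuse correlation with causation.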

Page 70: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Edward Perello
DESKGEN AI for CRISPR Genome Editing #1

Page 71: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Edward Perello
DESKGEN AI for CRISPR Genome Editing #1

Denoising and normalization important
Linear model sufficient to predict guide performance
Human genome better curated than mouse, thus the human model performed better
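The "linear model sufficient" point can be made concrete with a minimal sketch: one-hot encode each guide position and fit ordinary least squares. The guide sequences and efficiency scores below are invented; DESKGEN's actual features, data, and model are not described in detail here.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Flatten a guide sequence into a binary feature vector (4 × len)."""
    v = np.zeros(4 * len(seq))
    for i, b in enumerate(seq):
        v[4 * i + BASES.index(b)] = 1.0
    return v

# Hypothetical training data: 20-nt guides with measured cutting efficiency.
guides = ["ACGTACGTACGTACGTACGT", "TTTTACGTACGTACGTACGT",
          "ACGTACGTACGTACGTTTTT", "GGGGACGTACGTACGTACGT"]
scores = np.array([0.8, 0.3, 0.6, 0.7])

X = np.vstack([one_hot(g) for g in guides])
# Ordinary least squares (minimum-norm solution, since features >> samples)
w, *_ = np.linalg.lstsq(X, scores, rcond=None)

def predict(seq):
    return float(one_hot(seq) @ w)
```

With real data one would regularize (e.g. ridge) and cross-validate; the point is only that per-position nucleotide features plus a linear model already form a credible baseline for guide scoring.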

Developers actually found the Docker+Kubernetes workflow too heavy to follow, and pure Python code in a Conda environment was preferred at Deskgen AI

Deploying deep learning models with Docker and Kubernetes

https://www.slideshare.net/PetteriTeikariPhD/deploying-deep-learning-models-with-docker-and-kubernetes

Recurring CRISPR problems:
Humans get tired of choosing guides
Weights are carefully controlled by humans
Actual genome sequence often ignored
Humans concentrate on a few “winning” guides from recent “sexy papers”

In this review, we underline the current progress and the future potential of the CRISPR-Cas9 system towards biomedical, therapeutic, industrial, and biotechnological applications.

https://doi.org/10.1016/j.gene.2016.11.008

Mapping biology to computer science

http://doi.org/10.1126/science.1231143 → Cited by 2354 articles

Page 72: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Edward Perello
DESKGEN AI for CRISPR Genome Editing #2

Machine Learning Env
Jupyter Notebooks + PyData Stack + SciKit-Learn / TensorFlow

In-silico target genomes
salt-minion, BioInfo (Cython), postgresql (psycopg2), dregistry (Tornado), GCStorage (gcloud sdk), Omic Tools (Click), genome_fs (Cython), dgcli (Click), Browser (Vue.js)

SaltStack Control Layer orchestrates instance groups in both development and production environments

Salt in 10 Minutes - SaltStack Documentation
Saltconf 2016: Salt stack transport and concurrency

https://www.slideshare.net/socaldevopsusergroup/getting-started-with-salt-stack

ageron/handson-ml
A series of Jupyter notebooks that walk you through the fundamentals of Machine Learning and Deep Learning in python using Scikit-Learn and TensorFlow.

High-Performance Distributed Tensorflow Training and Serving - PyData London May 6, 2017by Chris Fregly https://youtu.be/TuGszWtR0ss

Page 74: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Anna Skaya
Improving the logistics behind developing machine learning models for personalized recommendations

Basepaws CatKit
“23andme” for cats; Embark would be similar for dogs

Hope that the gathered feline gene data could be used to model personalized medicine for humans as well, given that there are fewer privacy and ethics concerns with feline data

Deep learning in “proprietary metric space” via multi-agent model for phenotype.

https://youtu.be/zCsCf5AamhY

Minella, A. L., et al. "Differential targeting of feline photoreceptors by recombinant adeno-associated viral vectors: implications for preclinical gene therapy trials." Gene therapy 21.10 (2014): 913. doi: 10.1038/gt.2014.65

Freeman, Lisa M., et al. "Feline Hypertrophic Cardiomyopathy: A Spontaneous Large Animal Model of Human HCM." Cardiology Research 8.4 (2017): 139. doi: 10.14740/cr578w

Listings of board-certified veterinary cardiologists can be found online (http://find.vetspecialists.com)

DeepMetabolism: A Deep Learning System to Predict Phenotype from Genome Sequencing Weihua Guo, You Xu, Xueyang Feng (Submitted on 8 May 2017) https://arxiv.org/abs/1705.03094

We envision DeepMetabolism to bridge the gap between genotype and phenotype and to serve as a springboard for applications in synthetic biology and precision medicine.

DeepChem aims to provide a high quality open-source toolchain that democratizes the use of deep-learning in drug discovery, materials science, quantum chemistry, and biology.https://github.com/deepchem/deepchem

DeepChem Publications
Computational Modeling of β-secretase 1 (BACE-1) Inhibitors using Ligand-Based Approaches
Low Data Drug Discovery with One-Shot Learning
MoleculeNet: A Benchmark for Molecular Machine Learning
Atomic Convolutional Networks for Predicting Protein-Ligand Binding Affinity

Page 75: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Anna Skaya Comments and further literature background

Actual utility of the current kit was limited to “fun gimmicks” for the cat owners. Actual sequencing is still not accurate enough to be used by veterinarians and breeders. The phenotyping is also done by “noisy” pet owners, not by qualified practitioners

https://www.crunchbase.com/organization/embark-veterinary

“Dog sequencing has attracted a lot more capital already”

Similar “entry level” sequencing is offered also for horses, which is still quite far from a personalized medicine approach to maximizing their performance as race horses, for example. https://www.easydna.co.uk/horse-dna-test/

Page 77: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Jose Miguel Lobato
Advances in deep generative models of chemical compounds

First attempt at a generative model for molecules via a grammar variational autoencoder (GVAE). Note: only generating the molecule “grammar”, not the whole 3D structure

Bayesian optimization used then to pull out new molecules from the learned latent space with the hope of getting as “drug-like” molecules as possible.

Grammar constraint: instead of modeling the molecules using the SMILES language directly, the authors defined which structures can follow which structures, e.g. in f(x) = y, if we would generate f(+) = y, it would not be syntactically correct
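The grammar-constrained idea can be sketched directly: if the decoder emits production rules of a context-free grammar rather than raw characters, every sample parses by construction. The grammar below is a toy, purely illustrative fragment language, not the actual SMILES grammar used in the paper.

```python
import random

# A toy context-free grammar for a SMILES-like fragment language.
# Sampling production rules (not characters) guarantees syntactic validity,
# which is the core idea behind the Grammar VAE decoder.
GRAMMAR = {
    "chain": [["atom"], ["atom", "chain"], ["atom", "branch", "chain"]],
    "branch": [["(", "chain", ")"]],
    "atom": [["C"], ["N"], ["O"]],
}

def expand(symbol, rng, depth=0):
    if symbol not in GRAMMAR:          # terminal symbol: emit it
        return symbol
    rules = GRAMMAR[symbol]
    if depth > 4:                      # cap recursion: fall back to first rule
        rules = rules[:1]
    rule = rng.choice(rules)
    return "".join(expand(s, rng, depth + 1) for s in rule)

rng = random.Random(42)
molecules = [expand("chain", rng) for _ in range(5)]
# Every sample is syntactically valid: parentheses always balance.
```

As the slide notes, this buys syntactic validity only: a string like "CCO(N)C" parses, but nothing in the grammar guarantees it is a chemically sensible (semantically valid) molecule.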

Grammar Variational Autoencoder
Matt J. Kusner, Brooks Paige, José Miguel Hernández-Lobato (Submitted on 6 Mar 2017) | https://arxiv.org/abs/1703.01925 | https://github.com/mkusner/grammarVAE

Syntactic vs. semantic validity. It is important to note that the grammar encodes syntactically valid molecules but not necessarily semantically valid molecules.

Computational methods for the prediction of ‘drug-likeness’. David E Clark, Stephen D Pickett

Drug Discovery Today Volume 5, Issue 2, 1 February 2000, Pages 49-58. https://doi.org/10.1016/S1359-6446(99)01451-8

Assessing drug-likeness – what are we missing? Giulio Vistoli, Alessandro Pedretti, Bernard Testa

Drug Discovery Today Volume 13, Issues 7–8, April 2008, Pages 285-294. https://doi.org/10.1016/j.drudis.2007.11.007

The Many Roles of Computation in Drug Discovery William L. Jorgensen

Science 19 Mar 2004: Vol. 303, Issue 5665, pp. 1813-1818. http://doi.org/10.1126/science.1096361

Automatic chemical design using a data-driven continuous representation of molecules
Rafael Gómez-Bombarelli, David Duvenaud, José Miguel Hernández-Lobato, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, Alán Aspuru-Guzik (Submitted on 7 Oct 2016 (v1), last revised 6 Jan 2017 (this version, v2)) https://arxiv.org/abs/1610.02415

Tutorial on Variational Autoencoders
Carl Doersch

(Submitted on 19 Jun 2016 (v1), last revised 13 Aug 2016 (this version, v2)) https://arxiv.org/abs/1606.05908

Kadurin, Artur, et al. "druGAN: An Advanced Generative Adversarial Autoencoder Model for de Novo Generation of New Molecules with Desired Molecular Properties in Silico." Molecular Pharmaceutics (2017) http://doi.org/10.1021/acs.molpharmaceut.7b00346

Nash, Charlie, and Chris K. I. Williams. "The shape variational autoencoder: A deep generative model of part-segmented 3D objects." Computer Graphics Forum. Vol. 36. No. 5. 2017. http://doi.org/10.1111/cgf.13240

Page 78: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Jose Miguel Lobato Comments and further literature background: Significance of correct grammar without generative 3D structure?

AI-powered drug discovery captures pharma interest
Eric Smalley. Nature Biotechnology 35, 604–605 (2017) doi: 10.1038/nbt0717-604

A drug-hunting deal inked last month, between Numerate, of San Bruno, California, and Takeda Pharmaceutical to use Numerate's artificial intelligence (AI) suite to discover small-molecule therapies for oncology, gastroenterology and central nervous system disorders

Creating New Drugs, Faster: How AI Promises to Speed Drug DevelopmentJamie Beckett, NVIDIA, Feb 8, 2017https://blogs.nvidia.com/blog/2017/02/08/ai-drug-discovery/

BenevolentBio, the life sciences arm of London’s BenevolentAI, aims to reinvent drug discovery by using deep learning and natural language processing to understand and analyze vast quantities of bioscience information — patents, genomic data and the more than 10,000 publications uploaded daily across all biomedical journals and databases.

“Humans alone cannot process all this information to advance scientific research,” CEO Jackie Hunter said.

Alán Aspuru-Guzik, a Harvard University professor of chemistry and chemical biology, is also seeking to build treatments from the ground up using a deep learning neural network for what he calls “inverse molecular design (https://arxiv.org/abs/1610.02415).”

From machine learning to deep learning: progress in machine intelligence for rational drug discovery
Drug Discovery Today (4 September 2017) https://doi.org/10.1016/j.drudis.2017.08.010

Deep learning methods are new trend in modern drug discovery under big data era.

Although Linear Discriminant Analysis (LDA) is a simple approach, the combination of LDA and novel descriptors is still considered a powerful modeling method. For example, Marrero et al. (2015) used a LDA algorithm combined with topologic, 3D-chiral, topographic, and geometric descriptors to predict the antifungal activity of drugs and yielded a higher accuracy compared with other nonlinear approaches.
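The LDA-plus-descriptors recipe can be sketched with a two-class Fisher discriminant over synthetic "molecular descriptors". The data below is fabricated for illustration; Marrero et al. used far richer topologic, 3D-chiral, topographic, and geometric descriptors.

```python
import numpy as np

def fisher_lda_fit(X, y):
    """Two-class Fisher LDA: w = Sw^{-1} (mu1 - mu0), midpoint threshold."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    thresh = w @ (mu0 + mu1) / 2
    return w, thresh

def fisher_lda_predict(X, w, thresh):
    return (X @ w > thresh).astype(int)

# Hypothetical 2-d descriptors (think logP, polar surface area) for
# inactive (class 0) vs antifungal-active (class 1) compounds.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1.0, 2.0], 0.3, size=(20, 2)),
               rng.normal([3.0, 4.0], 0.3, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)

w, t = fisher_lda_fit(X, y)
acc = (fisher_lda_predict(X, w, t) == y).mean()
```

The point of the quoted passage survives in the sketch: the model itself is a single linear projection; all the discriminative power has to come from the descriptors.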

Entrepreneur First Demo Day Live Stream #ef8 https://youtu.be/w-3hvmQLTJg?t=1h15m38s

GTN Generative tensorial networks with Noor Shaker and Vid Stojevic

Page 79: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Navin Ramachandran
Issues to consider in adopting AI?

Page 81: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Navin Ramachandran
Comments and further literature background

Of course there is a danger of biased machine learning systems, which are vulnerable to hacking, for example via adversarial examples.

“How much of chimpanzee values were taught to humans?”

Future Tense: The Citizen’s Guide to the Future. Slate, April 20, 2016. By Adam Elkus

But Russell’s biggest problem lies in the very much “values”-based question of whose values ought to determine the values of the machine. One does not imagine too much overlap between hard-right Donald Trump supporters and hard-left Bernie Sanders supporters on some key social and political questions, for example. And the other (artificial) elephant in the room is the question of what gives Western, well-off, white male cis-gender scientists such as Russell the right to determine how the machine encodes and develops human values, and whether or not everyone ought to have a say in determining the way that Russell’s hypothetical A.I. makes tradeoffs.

It is unlikely that social scientists like Collins and others can offer any definitive insights about these questions. However, it is even less likely that Russell and his peers can avoid them altogether through technical engineering efforts. If only the problem were indeed just how to engineer a system to respect human values; that would make it very easy. The harder problem is the thorny question of which humans ought to have the social, political, and economic power to make A.I. obey their values, and no amount of data-driven algorithms is going to solve it.

“What do the ‘machines’ want to learn without setting the targets?”

Jürgen Schmidhuber: Formal Theory of Fun and Creativity and Intrinsic Motivation. Publication date 2014-08-15. Topics: theoretical neuroscience, machine learning

https://archive.org/details/Redwood_Center_2014_08_15_Jurgen_Schmidhuber
http://www.idsia.ch/~juergen/creativity.htm
http://www.idsia.ch/~juergen/interest.html

“Aggregation of data to big 5 players (Google, Facebook, Amazon, Apple, Microsoft)”

https://www.economist.com/news/briefing/21721634-how-it-shaping-up-data-giving-rise-new-economy

Page 82: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Panel
What does cultural and societal engagement with AI look like?

https://www.bioscience.ai/programme:

The use of artificial intelligence in bioscience and health sectors raises many societal and ethical questions for scientists, healthcare professionals, and policy-makers. What is the society we are trying to create? How will we know if/when we achieve it? However, as well as understanding these issues ourselves, we need to engage the public in a much wider conversation. This session will discuss:

Understanding public attitudes to science, and to specific issues such as AI

Creating new narratives around emerging technologies, that the public feel they can question and relate to

Clio Heslop (Moderator), Cultural Partnerships Manager at the British Science Association

Margaret A. Boden, Research Professor of Cognitive Science at the University of Sussex

Genevieve Smith-Nunes, Entrepreneur and Lecturer at Roehampton University School of Education

Maxine Mackintosh, Co-founder of One Health Tech

Page 83: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Panel
AI hype #1: The ‘bullshit machines’ such as Babylon Health and IBM Watson leading the way

Three years after IBM began selling Watson to recommend the best cancer treatments to doctors around the world, a STAT investigation has found that the supercomputer isn’t living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer.

“Watson for Oncology is in their toddler stage, and we have to wait and actively engage, hopefully to help them grow healthy,” said Dr. Taewoo Kang, a South Korean cancer specialist who has used the product.

https://www.statnews.com/2017/09/05/watson-ibm-cancer/

https://techcrunch.com/2017/04/25/babylon-health-raises-further-60m-to-continue-building-out-ai-doctor-app/

Riding on the hype: selling AI solutions that are quite narrow, replacing only simple low-level steps rather than focusing on high-level “multi-modal reasoning” beyond human capabilities.

The end of the clinician “silver-bullet” / “push-of-a-button” / “one-measure-sufficient-for-your-disease” approach is clear, but deep learning (not really proper AI) is not yet even able to properly use all electronic health records, due to ethical and privacy issues.

Page 84: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel
Robot Healthcare, and the need for ‘human touch’ #1

NEWS | HuffPost
‘Patients Want Human Doctors, Not Robots’, Medics Tell Jeremy Hunt. ‘Hunt must stop using every opportunity to dodge real investment’.

Doctors have aired their disagreement with Health Secretary Jeremy Hunt after he suggested robots could replace NHS medics by diagnosing diseases within a decade. Speaking at a conference in Manchester, Hunt said of the NHS in 2028: “We may well not be going to doctors for a diagnosis, we might be going to computers instead.” But GPs and experts have said Hunt’s embrace of virtual diagnoses betrays patients’ desire for “human interaction”, which machines will likely find difficult to replicate.

NEWS | HuffPost
Illnesses Diagnosed By Computer In Little More Than 10 Years, Says Jeremy Hunt

NHS patients could one day be diagnosed by computers instead of doctors, health leaders have said. In a decade’s time, patients may even be diagnosed with conditions before they are symptomatic as full genome sequencing becomes accessible to the masses, Health Secretary Jeremy Hunt said. In the nearer future, patients will be able to make their wishes about organ donation and end-of-life care known through simple apps, he said.

Healthcare
Vinod Khosla: Machines will replace 80 percent of doctors
By LIAT CLARK | 04 Sep 2012
http://www.wired.co.uk/article/doctors-replaced-with-machines

Machines will replace 80 percent of doctors in a healthcare future that will be driven by entrepreneurs, not medical professionals, according to Sun Microsystems co-founder Vinod Khosla.

Khosla, who wrote an article entitled Do We Need Doctors Or Algorithms? earlier this year, made the controversial remarks at the Health Innovation Summit in San Francisco, hosted by seed accelerator Rock Health.

Davis Liu, MD
Vinod Khosla: Technology Will Replace 80 Percent of Docs
http://thehealthcareblog.com/blog/2012/08/31/vinod-khosla-technology-will-replace-80-percent-of-docs/

Was Khosla serious that technology could make health care better by utilizing large data sets and computational power to clinch better and more precise diagnoses? Was he simply being provocative to hear other points of view to learn even more? Like many others in the conference, he believes that giving consumers more opportunities, access, and choice to information about themselves and their bodies would empower them to do the right thing.

Perhaps Khosla’s call to action was simply an entrepreneurial mindset, but simply ignoring those who have chosen a field to improve and save lives and who meet humanity every day on the front lines is problematic and dangerous. There are some things that may never be codified or driven into algorithms. Call it a doctor’s experience, intuition, and therapeutic touch and listening. If start-ups can clear the obstacles and restore the timeless doctor-patient relationship and human connection, then perhaps the future of health care is bright after all.

UK

USA

Page 85: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel
Robot Healthcare, and the need for ‘human touch’ #2: People for sure like the human interface, but do they actually want the biased clinician to make the calls for their health?

Posted by Peach Perera on 19 Nov, 2016https://medicfootprints.org/highlights-giant-health-event-2016/

“The increasing use of big data was a hot topic in general, and was touched on by many. Shruti Malani Krishnan, co-founder of Powr of You, gave a fascinating talk about actually paying consumers for using their anonymised personal data [”data donation”]. She explained how this could be used to enhance clinical trials by allowing researchers to see how patient behaviour affects their response to treatment, for example.”

Shruti presented a survey where people were most willing to donate (when asked) their medical data, and least willing to share their social media data. Which was of course an interesting contradiction between the experiencing self and the narrative self in "Hararian" terms.

http://dx.doi.org/10.1038/nbt.3341

Study Suggests Medical Errors Now Third Leading Cause of Death in the U.S.
Release Date: May 3, 2016
http://www.hopkinsmedicine.org/news/media/releases/study_suggests_medical_errors_now_third_leading_cause_of_death_in_the_us

10 percent of all U.S. deaths are now due to medical error.

Narrative review
The incidence of diagnostic error in medicine. Mark L Graber. BMJ Quality & Safety Volume 22, Issue Suppl 2. http://dx.doi.org/10.1136/bmjqs-2012-001615

Top alleged medical error named in claims where the patient expired (Physician Insurers Association of America (PIAA) Data Sharing Project Data 1985–2009, Physician Insurer, Vol 55, 2010).

The economics of health care quality and medical errors
Andel C, Davidow SL, Hollander M, Moreno DA. J Health Care Finance. 2012 Fall;39(1):39-50. https://www.ncbi.nlm.nih.gov/pubmed/23155743

The economics of patient safety
L Slawomirski, A Auraaen, N Klazinga – 2017. http://www.oecd.org/els/health-systems/The-economics-of-patient-safety-March-2017.pdf

Patient safety and quality of care

vs

Status quo for clinicians ?

Page 86: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel
Robot Healthcare, and the need for ‘human touch’ #2: “Confusion zone”

When hearing phrases (e.g. from a clinician) like “computers will never”, “AI will never”, “machines will never” replace us (e.g. Prof. Margaret A. Boden in the panel), it becomes obvious that the speaker does not fully understand the nature of (generic) AI systems, which can be improved at a much faster rate than Homo sapiens can be. The systems 2, 5, or 10 years from now can potentially be a lot better than the current ones (see e.g. the singularity predictions by Ray Kurzweil and Jürgen Schmidhuber).

Marketing departments (e.g. IBM Watson Health) typically also make similarly extravagant claims about what their AI systems can do (in reality they are still narrow AI, analogous to human sensory processing rather than higher cognitive functions… yet). Their statements are typically driven by quick profits and the overall hype, in the worst case leading to another AI winter.

2006: Celebrating 75 years of AI - History and Outlook: the Next 25 Years

Juergen Schmidhuber (Submitted on 31 Aug 2007) https://arxiv.org/abs/0708.4311

It looks as if history itself will converge in a historic singularity or Omega point Ω around 2040 (the term historic singularity is apparently due to Stanislaw Ulam (1950s) and was popularized by Vernor Vinge in the 1990s).

20 Jan 2012: TEDx Lausanne: Talk (12:47) on converging history and Omega Point. See Transcript. (All TEDx talks.)

When creative machines overtake man: Jürgen Schmidhuber at TEDxLausanne

https://www.meetup.com/London-Machine-Learning-Meetup/events/233763669/

Page 87: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel
Natural stupidity vs. Artificial intelligence

Computational Foundations of Natural Intelligence
Marcel van Gerven, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
http://dx.doi.org/10.1101/166785

I concentrate on the use of artificial neural networks as a framework for modeling cognitive processes. This paper ends by outlining some of the challenges that remain to fulfill the promise of machines that show human-like intelligence.

Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari


“shift to a posthuman future” (Pg. 319):

“The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking. The current scientific answer to this pipe dream can be summarized in three simple principles:

1) Organisms are algorithms. Every animal – including Homo Sapiens – is an assemblage of organic algorithms shaped by natural selection over millions of years of evolution

2) Algorithmic calculations are not affected by the materials from which you build the calculator. Whether you build an abacus from wood, iron or plastic, two beads plus two beads equals four beads.

3) Hence there is no reason to think that organic algorithms can do things that non-organic algorithms will never be able to replicate or surpass. As long as the calculations remain valid, what does it matter whether the algorithms are manifested in carbon or silicon?”

Waking Up With Sam Harris #68 - Reality and the Imagination (with Yuval Noah Harari) https://www.youtube.com/watch?v=5jgALWLc-JU

Sam Harris speaks with Yuval Noah Harari about meditation, the need for stories, the power of technology to erase the boundary between fact and fiction, wealth inequality, the problem of finding meaning in a world without work, religion as a virtual reality game, the difference between pain and suffering, the future of globalism, and other topics.

Page 88: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel
Human bias vs. Machine bias: Human bias and doctor ego still a larger problem?

Cognitive biases associated with medical decisions: a systematic review
Gustavo Saposnik, Donald Redelmeier, Christian C. Ruff and Philippe N. Tobler. BMC Medical Informatics and Decision Making 2016 16:138. https://doi.org/10.1186/s12911-016-0377-1

Overconfidence, the anchoring effect, information and

Semantics derived automatically from language corpora contain human-like biases
Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science 14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186. http://dx.doi.org/10.1126/science.aal4230

Page 89: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel
Human bias vs. Machine bias: Prevalent machine biases reflecting human biases

Intelligent Machines
Biased Algorithms Are Everywhere, and No One Seems to Care. The big companies developing them show no interest in fixing the problem. By Will Knight, July 12, 2017. https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/

This week a group of researchers, together with the American Civil Liberties Union, launched an effort to identify and highlight algorithmic bias. The AI Now initiative was announced at an event held at MIT to discuss what many experts see as a growing challenge. The eventual outcry might also stymie the progress of an incredibly useful technology (see “Inspecting Algorithms for Bias”).

Algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions. Proprietary algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan. Examples of algorithmic bias that have come to light lately, they say, include flawed and misrepresentative systems used to rank teachers, and gender-biased models for natural language processing.

Cathy O’Neil, a mathematician and the author of Weapons of Math Destruction, a book that highlights the risk of algorithmic bias in many contexts, says people are often too willing to trust in mathematical models because they believe it will remove human bias. “[Algorithms] replace human processes, but they’re not held to the same standards,” she says. “People trust them too much.”

Society-in-the-loop: programming the algorithmic social contract
Iyad Rahwan, The Media Lab, Massachusetts Institute of Technology, Cambridge, USA
Ethics and Information Technology pp 1–10 (2017)
https://doi.org/10.1007/s10676-017-9430-8

An AI stereotype catcher
Anthony G. Greenwald, Department of Psychology, University of Washington, Seattle, WA
Science 14 Apr 2017: Vol. 356, Issue 6334, pp. 133-134
http://dx.doi.org/10.1126/science.aan0649

Considering that AI software might unintentionally perpetrate gender discrimination, Bolukbasi et al. (2016) suggested computational methods to gender-debias AI text analyses. Caliskan et al. (2017)’s findings further encourage pursuit of this challenging task. Computational debiasing necessarily entails some loss of meaning, and gender is just one dimension on which AI text analyses might be debiased. How much useful meaning may disappear in the process of debiasing simultaneously for the legally protected classes of race, skin color, religion, national origin, age, gender, pregnancy, family status, and disability status? Hopefully, the task of debiasing AI judgments will be more tractable than the as-yet-unsolved task of debiasing human judgments.
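The computational debiasing step of Bolukbasi et al. (2016) mentioned above can be sketched as projecting out the component of a word vector along an identified bias direction. A toy illustration with invented 2-D vectors (the "gender direction" and the hypothetical occupation word are purely illustrative); note that the discarded component is exactly the "loss of meaning" the passage worries about:

```python
import numpy as np

def debias(word_vec, bias_direction):
    """Hard-debiasing step (Bolukbasi et al. 2016 style): subtract the
    projection of a word vector onto a unit-normalized bias direction."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return word_vec - np.dot(word_vec, b) * b

# Toy example: a "gender direction" estimated as she - he.
he = np.array([0.5, 0.1])
she = np.array([0.1, 0.5])
gender = she - he

word = np.array([0.4, 0.8])      # hypothetical occupation word
neutral = debias(word, gender)

# After projection the word carries no component along the gender direction:
print(np.dot(neutral, gender))   # ~0.0
```

Debiasing simultaneously for many protected classes would repeat this projection for each identified direction, shrinking the vector further each time, which is why meaning loss compounds.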

Page 90: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel | Consciousness: the most interesting problem in Biology?
Relation to free will; a detailed analysis is well beyond the scope of this slide presentation

http://dx.doi.org/10.1126/science.282.5395.1846 | Cited by 1371

https://doi.org/10.1016/j.tics.2017.01.004 | Cited by 1

Authors: Crick, Francis; Clark, J.
Source: Journal of Consciousness Studies, Volume 1, Number 1, 1994, pp. 10-16(7)
http://doi.apa.org/psycinfo/1994-97085-000 | Cited by 3604

Mapping cognitive structure onto the landscape of philosophical debate: An empirical framework with relevance to problems of consciousness, free will and ethics
Review of Philosophy and Psychology (2017) pp 1–41
https://doi.org/10.1007/s13164-017-0351-6

Free will in Bayesian and inverse Bayesian inference-driven endo-consciousness
https://doi.org/10.1016/j.pbiomolbio.2017.06.018

Consciousness: A User's Guide by Adam Zeman (2002)

Page 91: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel | Artificial Consciousness: ambiguous as a definition #1
Reggia (2013): Following earlier suggestions that a more precise definition, given our currently inadequate scientific understanding of consciousness, is best left to the future [Crick, 1994; Searle, 2004]. Some recent discussions of more formal definitions and the philosophical difficulties that these raise can be found, for example, in [Block, 1995, Cited by 2271; Sloman, 2010, Cited by 32; Zeman, 2002, Cited by 143].

A sketch of some theoretical positions concerning the nature of conscious mind. It is probably the case that the vast majority of individuals investigating the philosophical and scientific basis of consciousness today, including those developing computer models of consciousness, are functionalists [Churchland, 1984; Eccles, 1994; Searle, 2004].

There is a very important distinction that is often made in the consciousness literature; this distinction arises due to the ambiguity of the word “consciousness”. Specifically, there is a crucial difference between what are referred to as the easy problem and the hard problem of consciousness [Chalmers, 1996, 2007].

The easy problem refers to understanding the information processing aspects of conscious mind: cognitive control of behavior, the ability to integrate information, focusing of attention, and the ability of a system to access and/or report its internal states. Calling these problems “easy” does not mean that solving them will be easy, but that we believe that doing so will ultimately be possible in terms of computational and neurobiological mechanisms within the framework of functionalism.

In contrast, the hard problem of consciousness refers to the subjective experience associated with being conscious. The term qualia (felt qualities) is often used for our subjective experiences, such as the sensation of redness we experience during a sunset, the pain of a toothache, or the smell of a rose. The mystery here (the “hard” nature of the problem) is why it feels like anything at all to be conscious [Nagel, 1974, Cited by 7170]. Even if science ultimately explains the information processing mechanisms for all of the “easy” problems of consciousness, this will not explain why these mechanisms or functions do not occur without subjective awareness, but are instead accompanied by subjective experiences or qualia. In other words, there is an “explanatory gap” between a successful functional/computational account of consciousness and the subjective experiences that accompany it [Levine, 1983, Cited by 1595]. The term phenomenal consciousness is often used to emphasize that one is referring specifically to the phenomena of subjective experience (qualia). This can be contrasted with the term access consciousness, which refers to the availability of information for conscious processing, a decidedly functionalist concept.

The Singularity: A Philosophical Analysis
Chalmers, David; Journal of Consciousness Studies, Volume 17, Numbers 9-10, 2010, pp. 7-65(59)
https://philpapers.org/rec/CHATSA | Cited by 260

“What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the 'singularity'.”

AI and Consciousness: Theoretical Foundations and Current Approaches
Papers from the 2007 AAAI Fall Symposium
Antonio Chella and Riccardo Manzotti, Program Cochairs
https://www.aaai.org/Library/Symposia/Fall/fs07-01.php

Replication of the Hard Problem of Consciousness in AI and Bio-AI: An Early Conceptual Framework / 24 | Nicholas Boltuc, Piotr Boltuc

A Cognitive Approach to Robot Self-Consciousness / 30 | Antonio Chella, Salvatore Gaglio

Must Machines Be Zombies? Internal Simulation as a Mechanism for Machine Consciousness / 78 | Germund Hesslow, Dan-Anders Jirenhed

Demonstrating the Benefit of Computational Consciousness / 109 | Lee McCauley

Steps Towards Artificial Consciousness: A Robot's Knowledge of Its Own Body / 118 | Domenico Parisi, Marco Mirolli

Conscious Machines: Memory, Melody and Muscular Imagination / 142 | Susan A. J. Stuart

We need conscious robots, by Ryota Kanai
http://nautil.us/issue/47/consciousness/we-need-conscious-robots

Page 92: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel | Machine Consciousness #1

The rise of machine consciousness: Studying consciousness with computational models
James A. Reggia (a good review)
Neural Networks Volume 44, August 2013, Pages 112-131
https://doi.org/10.1016/j.neunet.2013.03.011

Expanding perspectives on cognition in humans, animals, and machines
Alex Gomez-Marin, Zachary F Mainen
Current Opinion in Neurobiology Volume 37, April 2016, Pages 85-91
https://doi.org/10.1016/j.conb.2016.01.011

• Neuroscience is attacking the problem of cognition with increasing vigor and rigor.
• Cognition is poorly defined and serves as a catch-all for any complex brain function.
• Diverse approaches from control theory to neurophenomenology are all relevant.
• A pluralistic, open-ended approach to cognition is advised.

Efforts to create computational models of consciousness have accelerated over the last two decades, creating a field that has become known as artificial consciousness. There have been two main motivations for this controversial work: to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness.

Two examples of some basic metacognitive neural architectures. Boxes indicate network layers. Broad gray arrows indicate all-to-all connections between layers whose weights change during learning, while thin black arrows indicate localized non-adaptive connections. a. The first-order network on the left is a feedforward error backpropagation network, often used for pattern classification tasks. The second-order network on the right has no causal influence on the first-order network’s performance; instead, it monitors the first-order network’s hidden layer representation and makes a high (H) or low (L) “wager” as to the correctness of the first-order network’s output for each input pattern that it receives. The correctness of this wager over time reflects the extent to which the second-order network has learned, also via error backpropagation, a meta-representation of the first-order network’s representation. b. A metacognitive architecture used to model experimental findings obtained from a human subject who had the clinical syndrome known as blindsight.
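The wagering scheme described in this caption can be sketched as two small networks, the second reading only the first's hidden layer. This is an untrained, illustrative skeleton with random weights (in the modeled experiments both networks learn via error backpropagation; the sizes and names here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# --- First-order network: input -> hidden -> output (pattern classifier) ---
W1 = rng.normal(size=(4, 8))   # input-to-hidden weights
W2 = rng.normal(size=(8, 2))   # hidden-to-output weights

def first_order(x):
    hidden = np.tanh(x @ W1)
    output = sigmoid(hidden @ W2)
    return hidden, output

# --- Second-order network: monitors the hidden layer and "wagers" ---
# It has no causal influence on the first-order network; it only reads the
# hidden representation and predicts whether the answer will be correct.
Wm = rng.normal(size=(8,))

def wager(hidden):
    p_correct = sigmoid(hidden @ Wm)
    return "H" if p_correct > 0.5 else "L"

x = rng.normal(size=4)
hidden, output = first_order(x)
print("class:", int(np.argmax(output)), "wager:", wager(hidden))
```

The key architectural point survives even in this skeleton: the wagering network sees only the first-order hidden representation, so a well-calibrated wager implies it has learned a meta-representation of that representation.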

… Quantum cognition is an offshoot of decision theory that borrows the quantum probability formalism based on the Dirac and Von Neumann axioms. Quantum decision models can explain several anomalous effects in sequential decision-making [44]. A quantum diffusion-to-bound model was introduced [45], which is a promising start toward other cognitive mechanisms. In the background, as either a beacon or a specter, is whether it is ultimately tied to quantum mechanics or quantum computing, as infamously proposed by Penrose [46].

Petteri: “quantum jargon to make everything sound fancier? e.g. Deepak Chopra”

Finally, we note two important and relatively new approaches that assert that the data relevant to cognition do not lie solely within the realm of neuroscience facts as traditionally construed. First, ‘neurophenomenology’, introduced by Francisco Varela, aims to include introspection as a source of scientific evidence [63]. Neurophenomenology is based on first-person (subjective) moment-to-moment accounts coupled with classical quantification procedures such as neuroimaging. Together, these are proposed to provide mutual constraints to tighten the link between neurodynamics and conscious experience and, in particular, to offer new insights in the study of variability and spontaneity in cognition [64 ]. Second, ‘embodied cognition’, also championed by Varela, centers on the recognition that cognition is embedded in a context beyond the brain itself, including the body as well as broader environmental, biological and cultural systems [65]. Both neurophenomenology and embodied cognition illustrate the nascent fruits of substantive dialogue between philosophy and neuroscience that we hope will continue to instill new perspectives in the years to come.

Page 93: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel | Machine Consciousness #2

Artificial consciousness and the consciousness-attention dissociation
Harry Haroutioun Haladjian, Carlos Montemayor
Consciousness and Cognition Volume 45, October 2016, Pages 210-225
https://doi.org/10.1016/j.concog.2016.08.011

What are the computational correlates of consciousness?
James A. Reggia, Garrett Katz, Di-Wei Huang
Biologically Inspired Cognitive Architectures Volume 17, July 2016, Pages 101–113
https://doi.org/10.1016/j.bica.2016.07.009

Some think that the so-called ‘singularity’ (the moment in which AI surpasses human intelligence) is near. Others say there is now a Cambrian explosion in robotics (Pratt, 2015). Indeed, there is a surge in AI research across the board, looking for breakthroughs to model not only specific forms of intelligence through brute-force search heuristics but by truly reproducing human intelligent features, including the capacity for emotional intelligence and learning. Graziano (2015; Build-a-brain), for example, has recently claimed that artificial consciousness may simply be an engineering problem— once we overcome some technical challenges we will be able to see consciousness in AI. Even without accomplishing this lofty goal of machine sentience, it still is easy to see many examples of human-like rational intelligence implemented in computers with the aim of completing tasks more efficiently.

… Along with these advances in AI, the field has moved beyond the design of intelligence in computing systems and into the realm of reproducing more complex human abilities like perception and emotion (Li, 2014). These more complex human abilities do in fact modulate activities related to intelligence in humans, so it is only logical that computing advancements would lead to attempts to model and integrate more complex systems into AI. Indeed, one can find many similarities between advances in technology and biological evolution (Wagner & Rosen, 2014).

Another aspect of AI that is lacking concerns endogenous, spontaneous motivation. In living organisms, there are various motivating factors that shape the evolution of the organism, primarily factors surrounding the sustenance and propagating of life—a basic drive that underlies the evolution of species. From the drive to seek food, to basic fear responses, to social cooperation, living organisms (particularly more complex ones like humans) have adapted in ways that generally facilitate the continuation of a species. Indeed, an important aspect of human interactions is the level of empathic motivation that occurs with different degrees of intensity across social groups, which also cannot be simulated by machines (Winczewski, Bowen, & Collins, 2016). Feelings, being part of a core emotional neural system that must have evolved before higher forms of conceptual cognition, are not going to be reproduced by any kind of purely informational search algorithm or even a large collection of algorithms. At best, feelings can only be simulated, as seen in research that presents computational accounts of emotions

… it remains unclear at present whether any of the currently hypothesized candidates for neurocomputational correlates of consciousness will ultimately prove to be satisfactory without further qualifications/refinements. This is not because they fail to capture important aspects of conscious information processing, but primarily because similar computational mechanisms have not yet been proven to be absent in unconscious information processing situations. These computational mechanisms are thus not yet specifically identifiable only with conscious cognitive activities. Future work is needed to resolve such issues, and to further consider other types of potential computational correlates (quantum computing, massively parallel symbol processing, etc.).

Page 94: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel | Machine Consciousness #3

Exploring the Computational Explanatory Gap
James A. Reggia, Di-Wei Huang and Garrett Katz
Philosophies 2017, 2(1), 5; doi: 10.3390/philosophies2010005

(Phenomenal) Consciousness is very poorly understood at present, and many people have argued that computational studies do not have a significant role to play in understanding it, or that there is no possibility of an artificial consciousness. For example, some philosophers have argued that phenomenal machine consciousness will never be possible for a variety of reasons: the non-organic nature of machines [Schlagel 1999], it would imply panpsychism [Bishop 2009], the absence of a formal definition of consciousness [Bringsjord 2007], or the general insufficiency of computation to underpin consciousness [Manzotti 2012, Piper 2012]. More generally, it has been argued that the objective methods of science cannot shed light on phenomenal consciousness due to its subjective nature [McGinn 2004], making computational investigations irrelevant.

The philosophical and computational explanatory gaps address different problems. The philosophical explanatory gap (top dashed line labeled PEG) refers to our inability to understand how subjective mental experiences can arise mechanistically from a material/physical substrate, the brain. In contrast, the computational explanatory gap (bottom dashed line labeled CEG) refers to our inability to understand how high-level cognitive algorithms and any macroscopic correspondences that they may have in the brain can be accounted for by the low-level computational mechanisms supported by the microscopic neurocomputational processes and algorithms of neural circuitry, artificial or otherwise.

Construction Kits for Biological Evolution
Aaron Sloman, School of Computer Science, University of Birmingham, UK
The Incomputable pp 237-292, Theory and Applications of Computability book series (THEOAPPLCOM)
https://doi.org/10.1007/978-3-319-43669-2_14 | [PDF]

A construction kit gives rise to very different individuals if the genome interacts with the environment in increasingly complex ways during development. Precocial species use only the downward routes on the left, producing preconfigured competences. Competences of altricial species, using staggered development, may be far more varied. Results of using earlier competences interact with the genome, producing meta-configured competences on the right.

This is part of the Turing-inspired Meta-Morphogenesis project, which aims to identify transitions in information processing since the earliest protoorganisms, in order to provide new understanding of varieties of biological intelligence, including the mathematical intelligence that produced Euclid’s Elements. (Explaining evolution of mathematicians is much harder than explaining evolution of consciousness!) Transitions depend on “construction kits”, including the initial “Fundamental Construction Kit” (FCK) based on physics, and Derived Construction Kits (DCKs) produced by evolution, development, learning and culture.

Some construction kits (e.g. Lego, Meccano, plasticine, sand) are concrete: using physical components and relationships. Others (e.g. grammars, proof systems and programming languages) are abstract: producing abstract entities, e.g. sentences, proofs, and new abstract construction kits. Mixtures of the two are hybrid kits. Some are meta-construction kits: they are able to create, modify or combine construction kits. Construction kits are generative: they explain sets of possible construction processes and possible products, with mathematical properties and limitations that are mathematical consequences of properties of the kit and its environment. Evolution and development both make new construction kits possible. Study of the FCK and DCKs can lead us to new answers to old questions, e.g. about the nature of mathematics, language, mind, science, and life, exposing deep connections between science and metaphysics. Showing how the FCK makes its derivatives, including all the processes and products of natural selection, possible is a challenge for science and philosophy. This is a long-term research programme with a good chance of being progressive in the sense of Lakatos. Later, this may explain how to overcome serious current limitations of AI (artificial intelligence), robotics, neuroscience and psychology

Page 95: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel | Machine Consciousness #4

Can Computers overcome Humans? Consciousness interaction and its implications
Camilo Miguel Signorelli
(Submitted on 7 Jun 2017 (v1), last revised 26 Jun 2017 (this version, v2))
https://arxiv.org/abs/1706.02274

This is a paradoxical and controversial question, particularly because there are many hidden assumptions. This article focuses on that issue putting on evidence some misconception related with future generations of machines and the understanding of the brain. … In other words, trying to overcome human capabilities with computers implies the paradoxical conclusion that a computer will never overcome human capabilities at all, or if the computer does, it should not be considered as a computer anymore.

First, claims about new futurist robots do not define this set of distinctions; they do not care about the importance of what it is to be a human. Secondly, they assume a materialist view of these distinctions (i.e. these distinctions emerge from physical and reproducible interaction of matter) without explaining the most fundamental questions about the matter [4]. Thirdly, they do not explain how subjective experience or emotions could emerge from the theory of computation that they assume as a framework to build machines, which will overcome humans. In other words, these views do not explain foundations of computation that support or reject the idea of high level cognitive computers.

This work does not expect to solve these issues; on the contrary, the aim of this paper is briefly to put in evidence misconceptions and misunderstanding of some crucial concepts. Additionally, the importance of new concepts and ideas will be approached in a preliminary and speculative way, with the intention of expanding them in further works.

An affective computational model for machine consciousness
Rohitash Chandra (Submitted on 2 Jan 2017)
https://arxiv.org/abs/1701.00349

The rapid advances in emerging technologies such as Internet of Things (IoT) [Gubbi et al. 2013, Cited by 3041] is leading to increasingly large collection of data. IoT has the potential to improve the health, transportation, education, agriculture and other related industries. Apart from the dimensionality of the data, there are other challenging factors that include complexity and heterogeneous datasets which makes the area of big data challenging. Recent success in the area of deep learning for computer vision and speech recognition tasks have given motivation for the future implementation of conscious machines. Howsoever, this raises deeper questions on the nature of consciousness and if deep learning with big data can lead to features that contribute or form some level of consciousness. Through the perspective of integrated information theory (IIT), complex structures in the model with feedback loops could lead to certain degrees of consciousness. Therefore, from the deep learning perspective, conventional convolutional networks do not fall into this category as they do not have feedback connections. However, if we consider recurrent neural networks, some of the architectures with additional information processing would fall in the category of consciousness from the perspective of IIT. The challenge remains in incorporating them as components form part of a larger model for machine consciousness [Sloman and Chrisley 2003]. In such model, deep learning, IoT, and big data would replicate sensory perception.

The challenges lie in further refining specific features such as personality and creativity which are psycho-physically challenging to study and hence pose limitations to the affective model of consciousness. Howsoever, the proposed effective model can be a baseline and motivate the coming decade of simulation and implementation of machine consciousness for artificial systems such as humanoid robots. The simulation for affective model of consciousness with the features of artificial qualia manager can also be implemented with the use of robotics hardware. In their absence, simulation can also be implemented through collection of audiovisual data and definition of certain goals. The affective model is general and does not only apply to humanoid robots, but can be implemented in service application areas of software systems and technology.

Future directions can be in areas of artificial personality and artificially creative systems (e.g. Karwowski et al. 2016). These can be done by incorporating advancing technologies such as IoT, semantic web, cognitive computing and machine learning, along with artificial general intelligence

Page 96: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel | Machine Consciousness #5A: Recurrent or feed-forward, and the IIT framework?

http://dx.doi.org/10.1371/journal.pcbi.1004286, Cited by 4

In the last decade, Giulio Tononi has developed the Integrated Information Theory (IIT) of consciousness. IIT postulates that consciousness is equal to integrated information (Φ). The goal of this paper is to show that IIT fails in its stated goal of quantifying consciousness.

The Sciences of Consciousness: Progress and Problems, Center for Brains, Minds and Machines (CBMM), Christof Koch, Allen Institute for Brain Science, https://youtu.be/4gT-1S3FO4s?t=1h9m34s | “Not pleasing the people worshipping at the altar of computationalism”

FEED-FORWARD ZOMBIE: Φ = 0

Nature Reviews Neuroscience 17, 307–321 (2016) doi: 10.1038/nrn.2016.22

http://dx.doi.org/10.1371/journal.pcbi.1003588 | Cited by 105

“Despite the functional equivalence, the feedforward system is unconscious, a ‘‘zombie’’ without phenomenological experience, since its elements do not form a complex.”

IIT 3.0 would predict that purely feed-forward systems, such as Google's Inception-based GoogleNet, cannot be conscious (Φ = 0).
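The feed-forward vs. recurrent distinction behind this prediction is a graph property: a network is purely feed-forward exactly when its directed connectivity graph is acyclic. A small sketch of that structural check (this only tests the precondition IIT cares about; it does not compute Φ itself):

```python
def is_feed_forward(n_units, edges):
    """True if the directed connectivity graph (unit -> unit) is acyclic,
    i.e. the network is purely feed-forward. IIT 3.0 would predict Phi = 0
    for such a system (a 'feed-forward zombie')."""
    adj = {u: [] for u in range(n_units)}
    for src, dst in edges:
        adj[src].append(dst)

    WHITE, GREY, BLACK = 0, 1, 2      # unvisited / on current path / done
    state = [WHITE] * n_units

    def has_cycle(u):
        state[u] = GREY
        for v in adj[u]:
            if state[v] == GREY:      # back edge: a feedback loop
                return True
            if state[v] == WHITE and has_cycle(v):
                return True
        state[u] = BLACK
        return False

    return not any(state[u] == WHITE and has_cycle(u) for u in range(n_units))

# A CNN-like chain 0 -> 1 -> 2 is feed-forward; adding 2 -> 1 makes it recurrent.
print(is_feed_forward(3, [(0, 1), (1, 2)]))          # True
print(is_feed_forward(3, [(0, 1), (1, 2), (2, 1)]))  # False
```

By this criterion a convolutional classifier like GoogleNet is acyclic, whereas any recurrent architecture (or the retinal circuits discussed on the next slide) contains cycles.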

Page 97: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel | Machine Consciousness #5B: Retina already highly recurrent

https://arxiv.org/abs/1604.03640
Center for Brains, Minds and Machines, McGovern Institute, MIT

After 40 Years, Retina Reveals It Uses Positive Feedback, as Well as Negative
Richard Robinson
PLoS Biol. 2011 May; 9(5): e1001058. Published online 2011 May 3. doi: 10.1371/journal.pbio.1001058 | PMCID: PMC3092782

Eye smarter than scientists believed: neural computations in circuits of the retina
Tim Gollisch, Markus Meister
Neuron. 2010 Jan 28; 65(2): 150-64
http://dx.doi.org/10.1016/j.neuron.2009.12.009

Feedback, both Fast and Slow: How the Retina Deals with Redundancy in Space and Time
Richard Robinson
https://doi.org/10.1371/journal.pbio.1001865

Towards a Deep Learning Model of Retina: Retinal Neural Encoding of Color Flash Patterns
Antonio Lozano, Javier Garrigós, J. Javier Martínez, J. Manuel Ferrández, Eduardo Fernández
IWINAC 2017: Natural and Artificial Computation for Biomedicine and Neuroscience pp 464-472
https://doi.org/10.1007/978-3-319-59740-9_46

Circuits in the retina: Deep learning as a biological modeling tool
Dawna Bagherian, Taehwan Kim, Yisong Yue, Markus Meister
NIPS 2016 Brains and Bits: Neuroscience Meets Machine Learning
http://www.stat.ucla.edu/~akfletcher/brainsbits.html

https://www.slideshare.net/PetteriTeikariPhD/datadriven-ophthalmology

Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing
Nikolaus Kriegeskorte | Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge
Annual Review of Vision Science Vol. 1: 417-446 (November 2015)
https://doi.org/10.1146/annurev-vision-082114-035447

Towards building a more complex view of the lateral geniculate nucleus: Recent advances in understanding its role
Masoud Ghodrati, Seyed-Mahdi Khaligh-Razavi, Sidney R. Lehky
Progress in Neurobiology Volume 156, September 2017, Pages 214-255
https://doi.org/10.1016/j.pneurobio.2017.06.002

And the retina-LGN-visual cortices network is just the sensory part of visual processing.

Page 98: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel | Machine Consciousness #6

The Morphospace of Consciousness
Xerxes D. Arsiwalla, Clement Moulin-Frier, Ivan Herreros, Marti Sanchez-Fibla, Paul Verschure
(Submitted on 31 May 2017 (v1), last revised 8 Jun 2017 (this version, v2))
https://arxiv.org/abs/1705.11190

Based on current clinical scales of consciousness, that measure cognitive awareness and wakefulness, we take a perspective on how contemporary artificially intelligent machines and synthetically engineered life forms would measure on these scales. To do so, we argue that awareness and wakefulness can be associated to computational and autonomous complexity respectively. Then, building on insights from cognitive robotics, we ask what function consciousness serves, and interpret it as an evolutionary game-theoretic strategy. We make the case for a third type of complexity necessary for describing consciousness, namely, social complexity. Having identified these complexity types, allows us to represent both, biological and synthetic systems in a common morphospace. This suggests an embodiment-based taxonomy of consciousness. In particular, we distinguish four forms of consciousness, based on embodiment: biological, synthetic, group (resulting from group interactions) and simulated consciousness (embodied by virtual agents within a simulated reality). Such a taxonomy is useful for studying comparative signatures of consciousness across domains, in order to highlight design principles necessary to engineer conscious machines. This is particularly relevant in the light of recent developments at the crossroads of neuroscience, biomedical engineering, artificial intelligence and biomimetics.

Clinical scales of consciousness. A clustering of disorders of consciousness in humans represented on scales of awareness and wakefulness

What does an agent operating in a social world need to do in order to optimize its fitness? This comprises the six fundamental problems that the agent is faced with, together referred to as the H5W problem [Verschure 2016]:

In order to act in the physical world an agent needs to determine a behavioral procedure to achieve a goal state; that is, it has to answer the HOW of action. In turn this requires the agent to:

Define the motivation for action in terms of its needs, drives and goals; that is, the WHY of action.

Determine knowledge of objects it needs to act upon and their affordances in the world, pertaining to the above goals; that is, the WHAT of action.

Determine the location of these objects, the spatial configuration of the task domain and the location of the self; that is, the WHERE of action.

Determine the sequencing and timing of action relative to dynamics of the world and self; that is, the WHEN of action.

Estimate hidden mental states of other agents when action requires cooperation or competition; that is, the WHO of action.
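For concreteness, the six questions can be collected into a simple record type. This is a purely illustrative data structure; the field names and example values are not part of Verschure's formulation:

```python
from dataclasses import dataclass

@dataclass
class H5W:
    """One field per question of the H5W problem (hypothetical record type)."""
    how: str    # behavioral procedure to reach the goal state
    why: str    # motivation: needs, drives and goals
    what: str   # objects to act upon and their affordances
    where: str  # locations of objects, task space and self
    when: str   # sequencing and timing of action
    who: str    # hidden mental states of other agents

plan = H5W(how="grasp-and-pull",
           why="hunger",
           what="door handle affords pulling",
           where="handle on door; self at entrance",
           when="after unlocking",
           who="no other agents involved")
print(plan.why)  # hunger
```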

No discussion on conscious machines is complete without the very important issue of ethics. Both, the societal impact and ethical considerations of any form of advanced machine, especially conscious machines, for obvious reasons, constitutes a very serious issue. For example, the impact of medical nanobots for removing tumors, attacking viruses or non-surgical organ reconstruction has the potential to change medicine forever. Or AI systems to clear pollutants from the atmosphere or the rivers are absolutely essential for some of the biggest problems that humanity faces. However, as discussed above, purely increasing the performance of a machine along the computational axis will not constitute consciousness as long as these capabilities are not accessible by the system to autonomously regulate or enhance its survival drives. On the other hand, whenever the latter is indeed made possible, issues of societal interactions of machines with humans and the ecosystem become an imminent ethical responsibility. It becomes important to understand the kind of cooperation-competition dynamics that a futuristic human society will face. Early stages of designing such machines are probably the best times to regulate their future impact on society. This analogy might not be surprising to any parent that has a child. Hence, a serious effort towards understanding the evolution of complex social traits is crucial alongside engineering advances required for the development of these systems

Page 99: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel | Synthetic Biology

Synthetic biology: does industry get it?
February 08 2017 | Organizers: Professors Paul Freemont and Ben Davis, and Steve Bates
https://royalsociety.org/science-events-and-lectures/2017/02/tof-synbio/

Ginkgo BioWorks | Microsoft Computational Biology Platform

Craig Venter: Watch me unveil “synthetic life”https://www.ted.com/talks/craig_venter_unveils_synthetic_life

PLAYING GOD: Scientists just two years away from making SYNTHETIC LIFE | Fri, Mar 10, 2017
http://www.express.co.uk/news/science/777512/PLAYING-GOD-SYNTHETIC-LIFE-sc2-0-yeast-dna

Synthetic life breakthrough could be worth over a trillion dollars | Thursday 20 May 2010
https://www.theguardian.com/science/2010/may/20/craig-venter-synthetic-life-genome

Covering ‘shoes from spider silk’, ‘Moore’s Law for the SynBio industry’, and ‘biologists as rock-star designers’, Dr Jason Kelly, Ginkgo BioWorks, highlighted synthetic biology’s progress into new markets, thanks to biology’s fundamental programmability.

Examples included artificial leather, major-brand sports shoes and designer clothing ranges incorporating artificial spider-silk peptides, and yeasts that produce the fragrances of roses or the flavours of mint. Ginkgo BioWorks, which now has ten cultured flavour and fragrance products, is making its own move into a total market for all plant-extracted ingredients worth in excess of $56 billion.

Industry 5.0—The Relevance and Implications of Bionics and Synthetic Biology
Peter Sachsenmeier | Engineering, Volume 2, Issue 2, June 2016, Pages 225-229
https://doi.org/10.1016/J.ENG.2016.02.015

This article provides a general taxonomy in which the development of bioengineering is classified in five stages (DNA analysis, bio-circuits, minimal genomes, protocells, xenobiology) from the familiar to the unknown, with implications for safety and security, industrial development, and the development of bioengineering and biotechnology as an interdisciplinary field. Ethical issues and the importance of a public debate about the consequences of bionics and synthetic biology are discussed.

In terms of turnover, industrial biotechnology as yet constitutes only a single-digit percentage of the overall turnover of the chemical industry worldwide. Biofuels, enzymes, antibiotics, vitamins, and amino acids are some of its products, for applications such as medicine, human food, animal feed, detergents, and other industries.

In the near future, second-generation bio-products will supplement classical methods of mutation and genetic selection. The new methods include metabolic engineering and systems biology, that is, genetic changes within organisms, or the introduction of donor genes from other organisms. New metabolisms will help to generate building blocks for new, specialized plastics. Artemisinin (against malaria), hydrocortisone, and penicillin will be produced in yeasts, provided that this can be done naturally, in relevant quantities, and at acceptable cost.

Page 100: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel: Synthetic Biology, the Startup Scene

RebelBio, by Elsa Sotiriadis | Posted September 19, 2017

Biology is becoming the new digital. Europe has been steadily rising as a global hub for innovation and is attracting big bets from veteran VC investors; it has even been hailed as the “New Silicon Valley.” A total of 15 multidisciplinary teams from across the world have begun the latest program at RebelBio, garnering an investment of over $100,000 for each company. In addition to gaining access to fully equipped labs and office spaces, they also draw from a network of hundreds of mentors, including RebelBio founder Bill Liao, who also cofounded Xing, Davnet, and CoderDojo.

Synthetic biology and Europe as analogized https://youtu.be/OJ-ELYdxmHE?t=42s

Grapeless Wine And Cowless Milk: 60+ Synthetic Biology Startups In A Market Map
February 22, 2017 | Life Sciences/Healthcare

Page 101: Notes on "Artificial Intelligence in Bioscience Symposium 2017"

Beyond Panel: Ethics of synthetic life / artificial life

Stud Hist Philos Biol Biomed Sci. 2013 Dec;44(4):688–696. doi: 10.1016/j.shpsc.2013.05.016

Is the creation of artificial life morally significant?
Thomas Douglas, Russell Powell, and Julian Savulescu

Though we have not argued for this view here, we believe that the capacity to create kinds of life that could never naturally exist does raise deep moral issues about the interests of those beings, their moral status, and the risks they pose to other beings. However, in our view, what matters, in answering those questions, is not how life is created but what non-genealogical properties it has.

Camb Q Healthc Ethics. 2015 Jan;24(1):58-65. doi: 10.1017/S0963180114000309

We must create beings with moral standing superior to our own. Rakić V.

Moreover, even if morally enhanced postpersons decide to annihilate mere persons, we can conclude, deductively, that such a decision is by necessity a morally superior stance to the wish of mere persons (i.e., morally unenhanced persons) to continue to exist.

Conclusion

1) The creation of postpersons is imaginable. Overcoming the comprehension motivation gap is not “beyond human expressive powers.”

2) The creation of postpersons is desirable because of inductive arguments, offering evidence that makes such a conclusion probable. The creation of morally enhanced postpersons is also our moral duty, because of a deductive argument. Unlike the inductive arguments, which do not in themselves constitute a proof, my deductive argument is a better-substantiated claim that it is our moral duty to create morally enhanced postpersons (contrary to the claim that Agar (2013) makes).

3) As it is our moral duty to create morally enhanced postpersons, it is our moral duty to devote ourselves to moral enhancement—with the important proviso that such enhancement is to be voluntary. If it were compulsory, our status as moral agents would be downgraded.

By Emerging Technology from the arXiv | September 19, 2017
https://www.technologyreview.com/s/608903/genetic-engineering-holds-the-power-to-save-humanity-or-kill-it/

Biotechnology and the lifetime of technical civilizations
John G. Sotos (Submitted on 4 Sep 2017)
https://arxiv.org/abs/1709.01149

Bioethics. 2016 Jun;30(5):372-9. doi: 10.1111/bioe.12248.

Synthetic Biology and the Moral Significance of Artificial Life: A Reply to Douglas, Powell and Savulescu

Christiansen A

Abstract: I discuss the moral significance of artificial life within synthetic biology via a discussion of Douglas, Powell and Savulescu's paper 'Is the creation of artificial life morally significant?'. I argue that their definitions of 'artificial life' and of 'moral significance' are too narrow. Douglas, Powell and Savulescu's definition of artificial life does not capture all core projects of synthetic biology or the ethical concerns that have been voiced, and their definition of moral significance fails to take into account the possibility that creating artificial life is conditionally acceptable. Finally, I show how several important objections to synthetic biology are plausibly understood as arguing that creating artificial life in a wide sense is only conditionally acceptable.