The NVIDIA TITAN V packs 12 GB of HBM2 memory and 640 Tensor Cores, delivering 110 teraFLOPS of performance. From CEO Jensen Huang's GTC Japan keynote, Dec. 13, 2017.
Dr. Theodore Berger's research focuses primarily on the hippocampus, a neural system essential for learning and memory.
Theodore Berger leads a multi-disciplinary collaboration with Drs. Marmarelis, Song, Granacki, Heck, and Liu at the University of Southern California, Dr. Cheung at City University of Hong Kong, Drs. Hampson and Deadwyler at Wake Forest University, and Dr. Gerhardt at the University of Kentucky, that is developing a microchip-based neural prosthesis for the hippocampus, a region of the brain responsible for long-term memory. Damage to the hippocampus is frequently associated with epilepsy, stroke, and dementia (Alzheimer's disease), and is considered to underlie the memory deficits characteristic of these neurological conditions.
The essential goals of Dr. Berger's multi-laboratory effort include: (1) experimental study of neuron and neural network function during memory formation (how does the hippocampus encode information?); (2) formulation of biologically realistic models of neural system dynamics (can that encoding process be described mathematically to realize a predictive model of how the hippocampus responds to any event?); (3) microchip implementation of neural system models (can the mathematical model be realized as a set of electronic circuits to achieve parallel processing, rapid computational speed, and miniaturization?); and (4) creation of conformal neuron-electrode interfaces (can cytoarchitectonically appropriate multi-electrode arrays be created to optimize bi-directional communication with the brain?). By integrating solutions to these component problems, the team is realizing a biomimetic model of hippocampal nonlinear dynamics that can perform the same function as part of the hippocampus.
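The predictive model in goal (2) is a nonlinear dynamical input-output model. As an illustrative sketch only (a single-input, second-order Volterra series with made-up kernels and spike-like input, not the team's fitted multi-input model), the idea of predicting a response from recent input history looks like this:

```python
import numpy as np

def volterra_predict(x, k0, k1, k2):
    """Second-order Volterra-series prediction: the output at time t
    depends linearly (k1) and quadratically (k2) on the recent input."""
    M = len(k1)                       # memory length, in samples
    y = np.full(len(x), k0, dtype=float)
    for t in range(M, len(x)):
        window = x[t - M:t][::-1]     # most recent sample first
        y[t] += k1 @ window           # first-order (linear) term
        y[t] += window @ k2 @ window  # second-order (nonlinear) term
    return y

# Illustrative kernels (not fitted to data): decaying linear memory,
# weak pairwise interactions between past inputs.
M = 8
k1 = np.exp(-np.arange(M) / 3.0)
k2 = 0.05 * np.outer(k1, k1)
x = np.random.default_rng(0).poisson(0.2, size=200).astype(float)
y = volterra_predict(x, k0=0.0, k1=k1, k2=k2)
```

The appeal of this model family for a prosthesis is that, once the kernels are estimated from recorded data, prediction is a fixed sequence of multiply-accumulate operations that maps naturally onto hardware.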
Several laboratories are now using Focused Ion Beam Scanning Electron Microscopes (FIB-SEM) to image small volumes of plastic-embedded brain tissue at resolutions approaching a 5×5×5 nm voxel size. That FIB-SEM can reach such resolution is of fundamental importance, since at this resolution all neuronal processes should be traceable with 100% accuracy using fully automatic algorithms. A fundamental physical limitation of the FIB ablation process is that this resolution can only be obtained for very small samples, on the order of 20 microns across. To overcome this limitation, Ken Hayworth has developed a technique using a heated, oil-lubricated, ultrasonically vibrating diamond knife that can section large blocks of plastic-embedded brain tissue into 20-micron-thick strips optimally sized for high-resolution FIB-SEM imaging. Crucially, this thick-sectioning procedure yields such high-quality surfaces that the finest neuronal processes can be traced from strip to strip.
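A quick back-of-envelope check (plain arithmetic, assuming the 5 nm voxels and 20-micron dimensions quoted above) shows why these strips are sized for FIB-SEM and how quickly the data volume grows:

```python
# At 5 nm isotropic voxels, a 20-micron-thick strip spans 4000 voxels
# in depth, and a single 20x20x20 micron block already holds 6.4e10 voxels.
voxel_nm = 5
strip_thickness_nm = 20_000            # 20 microns
depth_voxels = strip_thickness_nm // voxel_nm
side_voxels = 20_000 // voxel_nm       # assumed 20-micron lateral field
total_voxels = side_voxels ** 2 * depth_voxels
print(depth_voxels)   # 4000
print(total_voxels)   # 64000000000
```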
DNA stores and replicates information. Specific sequences of the four nucleobases (adenine, cytosine, guanine, thymine) encode life's blueprints. Each of these bases can be divided into a classical part (the massive nuclear cores) and a quantum part (the electron shell and single protons). The laws of quantum mechanics map the classical information (A, C, G, T) onto the configuration of electrons and the positions of single protons. Although DNA replication requires perfect copies of the classical information, the cores that constitute this information do not directly interact with the copying machinery. Instead, only the quantum degrees of freedom are measured. Successful copying therefore requires a correct translation of classical to quantum to classical information. It has been shown that the electronic system is well shielded from thermal noise. This leads to entanglement inside the DNA helix. It is an open question whether this entanglement influences genetic information processing. In this talk I will discuss possible consequences of entanglement for the information flow, and the similarities and differences between classical computing, quantum computing, and DNA information processing.
Big Data, AI, and social media echo chambers can feel scary, but if harnessed correctly they can dramatically improve our quality of life. The potential for improvement comes first from a better scientific understanding of our human minds and bodies, and second from a more open and shared understanding of society, government, and our day-to-day lives. The key to achieving these positive results is to aggressively pursue a new, broad science of human life that unifies the traditional, narrow sciences, and to make data a trusted and safe resource for everyone. We are building such systems today, changing “business as usual” for governments around the world, and beginning to unify the fragmented social and computational sciences.
Presented at TTI/Vanguard's Networks, Sensors, & Mobility, May 3–4, 2016, San Francisco, CA. Alex Kendall, Department of Engineering, University of Cambridge.
We can now teach machines to recognize objects. However, to teach a machine to “see,” we need to understand geometry as well as semantics. Given an image of a road scene, for example, an autonomous vehicle needs to determine where it is, what's around it, and what's going to happen next. This requires not only object recognition but also depth, motion, and spatial perception, and instance-level identification. A deep learning architecture can achieve all these tasks at once, even given a single monocular input image. Surprisingly, jointly learning these different tasks yields superior performance: supervising more information about the scene drives the network to learn a better shared representation. This method outperforms other approaches on a number of benchmark datasets, such as SUN RGB-D indoor scene understanding and CityScapes road scene understanding. Besides cars, potential applications include factory robotics and systems to help the blind.
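One way such joint learning is commonly done (in Kendall's published multi-task work among others) is to weight each task's loss by a learned per-task uncertainty. A minimal NumPy sketch of just that weighting, with toy loss values and hypothetical log-sigma parameters rather than a real network:

```python
import numpy as np

def multitask_loss(task_losses, log_sigmas):
    """Combine per-task losses L_i as sum_i( exp(-2*s_i)/2 * L_i + s_i ),
    where s_i = log(sigma_i) is a learned per-task uncertainty.
    High-uncertainty tasks are down-weighted; the +s_i term keeps the
    optimizer from inflating every sigma."""
    task_losses = np.asarray(task_losses, dtype=float)
    log_sigmas = np.asarray(log_sigmas, dtype=float)
    return float(np.sum(0.5 * np.exp(-2.0 * log_sigmas) * task_losses
                        + log_sigmas))

# Toy example: segmentation, depth, and instance losses, equal uncertainty.
combined = multitask_loss([1.2, 0.4, 0.9], log_sigmas=[0.0, 0.0, 0.0])
```

In training, the `log_sigmas` would be free parameters optimized jointly with the network weights, so the balance between tasks is learned rather than hand-tuned.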
This talk describes the current research path towards intelligent, semi-autonomous systems in which humans and automation tightly interact to accomplish tasks together, such as searching for survivors of a hurricane using a team of sensor-equipped UAVs with highly efficient interaction. The talk covers the current state of the art in (1) intelligent robotic-only systems, (2) modeling human decisions, and (3) semi-autonomous systems, with a focus on information exchange and command and control.
Mark Campbell is the S.C. Thomas Sze Director of the Sibley School of Mechanical and Aerospace Engineering at Cornell University.
Classical optimization techniques have found widespread use in machine learning. Convex optimization has occupied center stage, and significant effort is still devoted to it.
NIPS Workshop on Optimization for Machine Learning, Whistler, 2008.
Training a Binary Classifier with the Quantum Adiabatic Algorithm
Polyhedral Approximations in Convex Optimization
Optimization in Machine Learning: Recent Developments and Current Challenges
Large-scale Machine Learning and Stochastic Algorithms
Online and Batch Learning Using Forward-Looking Subgradients
Robustness and Regularization of Support Vector Machines
Tuning Optimizers for Time-Constrained Problems using Reinforcement Learning
Join leading researchers Dr. Eric Horvitz of Microsoft Research and Dr. Peter Norvig of Google for an intriguing discussion about the past, present, and future of artificial intelligence, moderated by KQED's Tim Olson.
This course will provide a simple unified introduction to batch training algorithms for supervised, unsupervised and partially-supervised learning. The concepts introduced will provide a basis for the more advanced topics in other lectures.
The first part of the course will cover supervised training algorithms, establishing a general foundation through a series of extensions to linear prediction, including: nonlinear input transformations (features), L2 regularization (kernels), prediction uncertainty (Gaussian processes), L1 regularization (sparsity), nonlinear output transformations (matching losses), surrogate losses (classification), multivariate prediction, and structured prediction. Relevant optimization concepts will be acquired along the way.
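As a concrete instance of the course's starting point, linear prediction with L2 regularization, here is a minimal ridge regression sketch (synthetic data and an illustrative regularization weight, not course material):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """L2-regularized linear prediction: minimize ||Xw - y||^2 + lam*||w||^2,
    solved in closed form by w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic regression problem with known weights and small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=50)
w = ridge_fit(X, y, lam=0.1)
```

The kernel extension mentioned above follows from rewriting this same solution in terms of inner products between training points, which is what lets nonlinear feature maps enter implicitly.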
The second part of the course will then demonstrate how unsupervised and semi-supervised formulations follow from a relationship between forward and reverse prediction problems. This connection allows dimensionality reduction and sparse coding to be unified with regression, and clustering and vector quantization to be unified with classification—even in the context of other extensions. Current convex relaxations of such training problems will be discussed.
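The forward/reverse-prediction connection can be illustrated with PCA: dimensionality reduction is the reverse problem of linearly predicting X back from low-dimensional codes Z, i.e. minimizing ||X - ZW||, which is solved in closed form by a truncated SVD. A minimal sketch on synthetic data (illustrative only):

```python
import numpy as np

# Synthetic centered data matrix: 100 points in 5 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))
X -= X.mean(axis=0)

# Reverse-prediction view of PCA: codes Z and weights W minimizing
# ||X - Z W||_F^2 come from the top-k singular triples.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
Z = U[:, :k] * S[:k]     # low-dimensional codes (scores)
W = Vt[:k]               # reverse-prediction weights (loadings)
err = np.linalg.norm(X - Z @ W) ** 2   # equals the discarded spectrum
```

The residual `err` equals the sum of the squared discarded singular values, which makes the regression/dimensionality-reduction correspondence exact in the linear case.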
The last part of the course covers partially-supervised learning—the problem of learning an input representation concurrently with a predictor. A brief overview of current research will be presented, including recent work on boosting and convex relaxations.