[logic-ml] Announcement of the NII Lecture Series (Randy Goebel)
Ken Satoh
ksatoh at nii.ac.jp
Fri Jan 11 17:19:17 JST 2013
Announcement of the NII Lecture Series
The National Institute of Informatics (NII) invites distinguished overseas
researchers in informatics to deliver lecture series.
This time we are pleased to announce a lecture series by Prof. Randy Goebel
of the University of Alberta, Canada, who has been engaged in AI research
since the field's early days.
He will lecture on cutting-edge research that applies the principles of AI
to big data.
Lecturer: Prof. Randy Goebel (Department of Computing Science, University
of Alberta, Edmonton, Alberta, Canada)
Lecture title: Do the emerging tools for managing big data fit with the
founding principles of Artificial Intelligence?
http://www.nii.ac.jp/en/event/list/0212
Venue: Room 2010, 20th floor, National Institute of Informatics
Dates: February 12, 20, and 26, and March 1, 2013
Time: 13:30-15:00
Attendance is free, and everyone is welcome.
We hope you will consider attending.
Ken Satoh
National Institute of Informatics and SOKENDAI (The Graduate University for
Advanced Studies)
=======
NII Lecture Series Title:
Do the emerging tools for managing big data fit with the founding
principles of Artificial Intelligence?
Ideas on the integration of the advice taker, structured inference,
reasoning with incomplete information, and building multi-scale models from
data.
Speaker: Prof. Randy Goebel (Department of Computing Science, University of
Alberta, Edmonton, Alberta, Canada)
He is also vice president of the innovates Centre of Research Excellence
(iCORE) at Alberta Innovates Technology Futures (AITF), chair of the Alberta
Innovates Academy, and principal investigator in the Alberta Innovates
Centre for Machine Learning. He received the B.Sc. (Computer Science), M.Sc.
(Computing Science), and Ph.D. (Computer Science) from the Universities of
Regina, Alberta, and British Columbia, respectively.
At AITF, Randy is in charge of reshaping research investments (graduate
student scholarships, research chairs, research centres). His research
interests include applications of machine learning to systems biology,
visualization, and web mining, as well as work on natural language
processing, web semantics, and belief revision. Randy has experience working
on industrial research projects in crew scheduling, pipeline scheduling, and
steel mill scheduling, as well as scheduling and optimization projects for
the energy industry in Alberta.
Randy has held appointments at the University of Waterloo, University of
Tokyo, Multimedia University (Malaysia), Hokkaido University (Sapporo), and
has had research collaborations with DFKI (German Research Centre for
Artificial Intelligence), NICTA (National ICT Australia), RWC (Real World
Computing project, Japan), ICOT (Institute for New Generation Computer
Technology, Japan), NII (National Institute of Informatics, Tokyo), and is actively
involved in academic and industrial collaborative research projects in
Canada, Australia, Europe, and China.
Abstract:
The modern discipline of computer science has many facets, but what has
clearly emerged in the last decade are three themes based on 1) rapidly
accumulating volumes of data, 2) inter- and cross-disciplinary application
of computer science to all scientific disciplines, and 3) a renewed interest
in the semantics of complex information models, spanning a spectrum from the
semantic web and natural language to multi-scale systems biology.
This series of four lectures will attempt to knit together these three
themes, by presenting the ideas that have emerged in their support: the
rapid development and extension of machine learning theory and methods to
help make sense of accumulating volumes of data, the application of computer
science to nearly all scientific disciplines, especially those whose
progress now necessarily relies on the management and interpretation of
large data, and finally, the revival of a focus on semantics of information
models based on data.
Outline:
Lecture 1: Connecting Advice Taking and Big Data
Lecture 2: Structured inference and incomplete information
Lecture 3: Natural Language Processing: Compressing Data to Models
Lecture 4: Hypothesis Management with Symbols and Pictures
Place:
Lecture room 2010, 20th floor, National Institute of Informatics
Date:
13:30-15:00, February 12, 20, and 26, and March 1, 2013
Lecture 1
Connecting Advice Taking and Big Data
Tuesday, February 12, 2013, 13:30 - 15:00
A fundamental premise of Artificial Intelligence (AI) is the ability for
a computer program to improve its behaviour by taking advice. Incremental
accumulation of advice or knowledge has never been easier than today, when
the rate of data capture is higher than ever before, and the management of
big data and deployment of machine learning are coupled to help manage the
transition from data to knowledge. This lecture uses simple technical
concepts from nearly sixty years of AI, to identify some of the research
challenges of managing big data, and exploiting knowledge emergent from big
data. The goal is to find some important research priorities based on the
motivation of the Advice Taker, and the current state of big data management
and machine learning.
Lecture 2
Structured inference and incomplete information
Wednesday, February 20, 2013, 13:30 - 15:00
If the foundation of Artificial Intelligence (AI) is the accumulation
and use of knowledge, then a necessary step is structuring that knowledge so
that inferences can be made from it. The organizational structures required to
facilitate inference now span a broad spectrum of mathematical methods,
including everything from simple propositional logic to sophisticated
statistical and probabilistic inference. The two foundational components of
computational inference are semantics of formal reasoning, and the
development of reasoning methods to deal with incomplete information. This
lecture reviews the foundational components of semantics and reasoning
systems, including the development of goal-oriented reasoning based on
abductive reasoning, the connection between logical and probabilistic
systems, and especially how the architecture of reasoning systems can
provide the basis for managing hypotheses in the face of incomplete
information.
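The goal-oriented abductive reasoning mentioned in this abstract can be illustrated with a toy sketch (purely illustrative; the rules, atom names, and brute-force search below are my assumptions, not material from the lecture). Given Horn-style rules, a set of abducible atoms, and an observation, abduction searches for a smallest set of assumptions from which the observation can be derived:

```python
# Toy propositional abduction sketch (hypothetical example, not from the
# lecture): find minimal sets of abducible atoms that explain an observation.
from itertools import combinations

# Rules map each head atom to a list of alternative bodies (conjunctions).
RULES = {
    "wet_grass": [["rain"], ["sprinkler_on"]],
    "slippery": [["wet_grass"]],
}
ABDUCIBLES = ["rain", "sprinkler_on"]  # atoms we are allowed to assume

def derivable(goal, facts):
    """Forward-chain over RULES from the assumed facts until fixpoint."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, bodies in RULES.items():
            if head not in known and any(
                all(atom in known for atom in body) for body in bodies
            ):
                known.add(head)
                changed = True
    return goal in known

def abduce(observation):
    """Return all smallest sets of abducibles that entail the observation."""
    for size in range(len(ABDUCIBLES) + 1):
        explanations = [
            set(combo)
            for combo in combinations(ABDUCIBLES, size)
            if derivable(observation, combo)
        ]
        if explanations:  # stop at the first (minimal) cardinality that works
            return explanations
    return []

print(abduce("slippery"))  # both singleton assumptions explain the observation
```

In this sketch, "slippery" is explained by assuming either "rain" or "sprinkler_on"; real abductive systems replace the exhaustive subset search with goal-directed (backward-chaining) procedures, which is the connection to goal-oriented reasoning the lecture describes.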
Lecture 3
Natural Language Processing: Compressing Data to Models
Tuesday, February 26, 2013, 13:30 - 15:00
The problem of machine processing of natural language (NLP) has long
been a research focus of artificial intelligence. This is partly because the
use of natural language is easily conceived as a cognitive task requiring
human-like intelligence. It is also because the rational structures for
computer interpretation of language require the full suite of computational
tools developed over the last hundred years (grammar, dictionaries, logic,
parsing, inference, and context management). Most of the recent practical
advances in NLP have arisen in the context of simple machine learning
applied to large language corpora, to induce fragments of language models
that provide the basis for interpretive and generative manipulation of
language. These largely statistical models have arisen in what has been
called the "pendulum swing" of NLP, in which statistical models have
recently dominated those based on structural linguistics. In this lecture,
we look at the concept of noisy corpora and their role in language models,
including some interesting alternative sources of data for building language
models. The applications range from complex language summarization to
information extraction from medical, legal, and historical documents.
Lecture 4
Hypothesis Management with Symbols and Pictures
Friday, March 1, 2013, 13:30 - 15:00
The current suite of Artificial Intelligence (AI) tools has provided a
basis for sophisticated human-computer interfaces based on more than typing
in language. In fact, one can develop multi-level representations that
provide the basis for direct manipulation of visualizations. By constraining
the repertoire of direct manipulations, one can enrich human computer
interaction so that both humans and machines can understand and exploit
visual interaction. This lecture shows how such direct manipulation
requires a large repertoire of formal reasoning methods, and provides a
sketch of a formal framework and the problems arising in its development.