autonomous asteroid exploration
TRANSCRIPT

7/27/2019 Autonomous Asteroid Exploration

1556-603X/13/$31.00 ©2013 IEEE. November 2013 | IEEE Computational Intelligence Magazine 25
Digital Object Identifier: 10.1109/MCI.2013.2279559
Date of publication: 16 October 2013
N.K. Lincoln
Faculty of Engineering
and the Environment,
University of Southampton, UK
S.M. Veres
Department of Automatic Control
and Systems Engineering,University of Sheffield, UK
L.A. Dennis, M. Fisher,
and A. Lisitsa
Department of Computer Science,
University of Liverpool, UK
Autonomous Asteroid Exploration by Rational Agents
I. Introduction
Complex autonomous (robotic) missions require decisions to be made
taking into account multiple factors such as mission goals, priorities,
hardware functionality, performance and instances of unexpected
events. The concise representation and use of relevant knowledge
about action and sensing is only part of the autonomous control problem; the
organization of the necessary perception processes, prediction of the possible
outcomes of action (or inaction), and the communication required for deci-
sion making are also critical. The development of any autonomous system cul-
minates in a functional intelligent system, where the developed system
operates within a specified domain and implements procedures based upon
declarative, procedural and generally heterogeneous knowledge defined across
the operational domain. The consistency and integrity of this system knowledge is an essential aspect of the resultant system. Motivated by these problems,
our research has produced a formally verifiable deliberative agent architecture
linked to a natural language knowledge representation of the world model and
possible actions. This architecture has been applied in the context of autono-
mous space systems, both within simulated environments, and on hardware
within a purpose built ground facility [1, 2, 3].
Autonomy is a highly relevant topic for deep space missions where scientific
interest in asteroids acts as a technology driver to produce spacecraft that perform complex missions at great distances from Earth. Two such missions, Hayabusa and Dawn, run by JAXA and NASA respectively, are asteroid exploration
Abstract: The history of software agent architectures has been driven by the parallel requirements of real-time decision-making and the sophistication of the capabilities the agent can provide. Starting from reactive, rule based, subsumption through to layered and belief-desire-intention architectures, a compromise always has to be reached between the ability to respond to the environment in a timely manner and the provision of capabilities that cope with relatively complex environments. In the spirit of these past developments, this paper is proposing a novel anthropomorphic agent architecture that brings together the most desirable features: natural language definitions of agent reasoning and skill descriptions, shared knowledge with operators, the combination of fast reactive as well as long term planning, the ability to explain why certain actions are taken by the autonomous agent and, finally, inherent formal verifiability. With these attributes, the proposed agent architecture can potentially cope with the most demanding autonomous space missions.
Research supported by EPSRC through grants EP/F037201/1 and EP/F037570/1.
missions [4, 5]. The former, Hayabusa, was a sample return mis-
sion that targeted the Itokawa Near Earth Object and successfully
returned a soil sample to Earth in 2010. The latter mission, Dawn,
is a current mission targeted at the proto-planets Vesta and Ceres
that reside in the Asteroid belt between Mars and Jupiter.
Although successful, the Hayabusa mission ran into problems that
threatened its completion: in 2003 a solar flare damaged the solar
panels onboard the spacecraft reducing the efficiency of the pro-
pulsion system; in 2005 two reaction wheels failed, compromising
the attitude controllability for the remainder of the mission; the
release of the Minerva probing robot failed; a communication
blackout of nearly two months was caused by a fuel leakage; and,
in 2009 during the return cruise, an ion engine anomaly was
detected, which took 15 days to circumvent. The mission phases
and operations of Hayabusa were controlled from Earth and this
was a contributing factor to the failed Minerva probe release:
because of the communication time lag of 32 minutes between
the ground station and the spacecraft, the command to release the
probe was received during an automatically triggered ascent phase from the asteroid and the probe was consequently lost
in space.
These events, encountered during the Hayabusa mission,
highlight the need for robotic platforms to be capable of reli-
ably performing localized decision making to enable comple-
tion of their tasks while immersed in a dynamic, unknown, and
possibly hostile environment.
The agent programming system to be presented here devel-
ops the anthropomorphic belief-desire-intention agent pro-
gramming approach further by enabling efficient hierarchical
planning and execution capabilities. There are five important
benefits: (1) the method we propose simplifies agent operations relative to multi-layer agents by blending reactive and foresight
based behaviors through logic based reasoning. (2) Agent opera-
tions such as sensing, abstractions, task executions, behavior rules
and reasoning become transparent for a team of programmers
through the use of natural language programming. (3) The abil-
ity to use English language descriptions to define agent reason-
ing also means that operators and agents can have a shared
knowledge of meanings and procedures. (4) In our system it is
also straightforward to program an agent to make it able to
explain its selected actions or its problems to its operators. (5)
Despite its user friendliness, our system is formally verifiable by
model checking methods to ensure our agents always try to do their best.
The improvements proposed here signify important practi-
cal benefits for engineers designing and operating such autono-
mous systems.
II. Programming Approaches to Autonomous Systems
Current approaches to programming autonomous robot opera-
tions fall under the closely related domains of:
1) programming hybrid automata [6, 7, 8, 9]
2) agent oriented programming [10, 11, 12, 8, 13, 14]
3) programming vertical-horizontal multi-layered systems
[15, 16]
4) programming hierarchical planners and executors [17, 18].
In this section a brief review is given that is followed by a
list of the features of our programming system.
A. Hybrid-Automata-Based Autonomy
Hybrid systems model computer controlled engineering sys-
tems that interact with continuous physical processes in their
environment [19, 7]. Efforts to model agent systems as a series
of interconnected hybrid automata have been made [20, 21, 22]
and the commercial software, State FlowTM [23], is widely used
by industry. This motivated agent development using hybrid-
automaton based models [24].
To implement a deliberative agent system through a set of
hybrid automata would entail the representation of each of the
available plans of the agent as a hybrid automaton. The resulting
system would become very complex and a multi-agent system
would be expressed as a large parallel set of concurrent automata.
B. Multi-Layered Agents
Multi-layered agent systems aim to combine the timely nature
of reactive architectures (hybrid automata) with the analytic
approach to the environment that takes more time [25, 15].
Consequently, as their name would suggest, these systems
involve a horizontal or vertical hierarchy of interacting subsys-
tem layers. The complexity of potential information bottle-
necks, and the need for internal mediation within purely hori-
zontal architectures, are partially alleviated by adding vertical
architectures, though these structures do not easily provide for
fault tolerance [26, 16]. Knowledge based deliberation within
agent systems is performed with logical reasoning over sym-
bolic definitions that explicitly represent a model of the world in which the agent resides and is encapsulated by the inten-
tional stance, wherein computational (agent) reasoning is sub-
ject to anthropomorphism. This is a promising approach for a
system to cope with the computational complexities associated
at high levels of automated decision making [27]. Although
there is no all-encompassing agent theory for multi-layered
agents, significant contributions have been made concerning
the properties an agent should have and how they should be
formally represented and reasoned [28, 29].
C. Agent Oriented Programming
Logical frameworks in which beliefs, desires and intentions (BDI) are primitive attitudes, as developed by Bratman and fol-
lowing the philosophy of Dennet [30, 28], are a popular format
for deliberative systems. Deliberative architectures, and their
logical foundations, have been thoroughly investigated and used
within numerous agent programming languages, including
AgentSpeak, 3APL, GOAL and CogniTAO as well as the Java
based frameworks of JADE, Jadex and mJACK [11, 31, 32, 33,
13]. Possibly the most widely known BDI implementations are
the procedural reasoning system (PRS) and InteRRaP [14].
PRS is a situated real-time reasoning system that has been
applied to the handling of space shuttle malfunctions, threat
assessment and the control of autonomous robots [34, 35].
Within the PRS framework, a knowledge
database detailing how to achieve particular
goals or react to certain situations is interfaced
by an interpreter. In its concept this is similar
to a horizontally layered Touring machine [10]
with parallel layers for modelling, planning
and reactive behavior, though notions of intention are con-
tained and consequently place PRS as a deliberative BDI
model. InteRRap is a hybrid BDI architecture with three sepa-
rate layers incorporating behavior, local planning and coopera-
tive planning. These interface the world state, model and social
knowledge bases, respectively. These knowledge bases are in
turn fed through perception and communication processes.
Ultimately it is the lowest level behavioral layer that is responsi-
ble for environmental interaction, though this is augmented by
higher levels that involve deliberation processes to enable the
attainment of high level goals [16].
D. Hierarchical Planner and Executor Systems
Two closely related frameworks, MDS (Mission Data System)
and CLARAty (Coupled Layer Architecture for Robotic
Autonomy) [17], provide frameworks for the development of
goal-based systems intended for, though not limited to,
robotic space systems. MDS is an architectural framework
encapsulating a set of methodologies and technologies for
the design and development of complex, software intensive,
control systems on a variety of platforms that map high level
intentions (goals) through to actions. The system architecture
focuses on expressing the physical states that a control system
needs to manage and the interactions between these states
[36]. CLARAty is a two-layer object oriented software infrastructure for the development and integration of robotic
algorithms. Its prime purpose is to provide a common reus-
able interface for heterogeneous robotics platforms by using
established object oriented design principles. In a CLARAty
architecture two layers separate a mainly declarative program-
ming within a Decision Layer and a mainly procedural pro-
gramming within a Functional Layer. It is the functional
layer that encapsulates low and mid-level autonomous capa-
bilities, whereas the decision layer encapsulates global reason-
ing about system resources and mission constraints. Both
MDS and CLARAty seek the application of a generic frame-
work for the development of goal based systems on heterogeneous platforms. More specifically, MDS seeks to link sys-
tem abstractions between systems engineers and software
engineers through model based design; CLARAty seeks to
promote reusable code to enable advancement of robotic
control algorithms.
An interesting middle ground between BDI systems and
complex decision trees is the Remote Agent technology
deployed on the DeepSpace1 technology proving mission [37].
Remote Agent, an agent system based upon model-based diag-
nosis (MBD), integrated three separate technologies: an on-
board planner-scheduler (EUROPA), a robust multi-threaded
executive, and a model-based fault diagnosis and recovery sys-
tem (Livingstone). In keeping with the ethos of agent systems,
rather than being commanded to execute a sequence of com-
mands, the system design of Remote Agent was such that the
attainment of a list of goals was sought [38].
E. Hybrid Automata Versus Deliberative Agents
The potential state explosion resulting from replicated plan
representations within discrete states of the hybrid system is
touched upon in [20] and discussed in more detail in [39].
Moreover, constructing a deliberative agent from the per-
spective of hybrid automata involves a loss of expressivity; the
representation as hybrid automata often loses the explicit differ-
entiation between what the agent can do and what it will actually choose to do. Abstraction systems in hybrid automata are
implicit. This makes system operations more difficult to under-
stand, especially when things go wrong.
F. Planner-Executive Systems Versus Deliberative Agents
Deliberative agents have a large number of preprogrammed
plans and focus on selecting the appropriate plan for execution
given the current situation. Some deliberative agent approaches
such as Jason [13] and Jade [31] can also generate their own
plans, or access sub-systems dedicated to planning based on the
underlying capabilities of the systems. Other systems, such as
CLARAty [17], implement continuous planning and execution based approaches that consequently make planning the centre piece of an agent's existence.
Deliberative agents use a reasoning cycle which first
assesses the current situation using reasoning by logic and
then selects and/or executes a plan. This assessment is based
upon anthropomorphic concepts such as beliefs, goals and
intentions. Planning procedures and temporal-logic-based
hypothetical inference (about consequences of future actions)
can be accommodated within these deliberative agent frame-
works. Two-layer planner/executor-functional agents, on the
other hand, focus all their activity on efficient real-time
replanning. Instead of complete replanning, deliberative agents can have access to plan-libraries defined for them as
their knowledge and skill base. In many ways planner-execu-
tive systems and deliberative agents do the same job in differ-
ent ways but the most striking difference is in terms of
human-like reasoning in deliberative agents that tends to
make them more able to collaborate with human operators.
G. Features of Our Approach
Creating a new software architecture with significant benefits
for autonomous robot control is a difficult problem when
there are excellent software packages around [11, 13, 31, 14,
17]. In this section we identify the possible aspects of further
An agent is software that enables a robot to make its own decisions to achieve some goals, plan ahead and keep itself to behavior rules.
progress in terms of new features or strengthening existing
features. Our architecture enables a programmer to equip the
agent with the intelligence features classified in [40] as cogni-
tive intelligence, social intelligence, behavioral intelligence,
ambient intelligence, collective intelligence and genetic intelli-
gence. We also emphasize the existence of shared concepts and understanding between humans and the agents, and the agent's natural ability to explain why it has chosen a given action.
Another important consideration is simplicity and sharability
of agent code during the development and easy maintenance
during its use.
Architectural approach
A model-based approach is taken for handling data with structural templates provided by an ontology of the agent. Class
names and their attributes are precise professional terms under-
standable by operators of agents. In the delib-
eration process a balance is created between
the amount of real-time planning and use of a
priori plans that are available. This enables
seamless blending of fast reactive behavior and
slow contemplative evaluation of long term
consequences of agent actions and environmental events.
Programming paradigms and languages
Both declarative and procedural programming is used in three
layers: natural language program (NLP) compiles into embed-
ded MATLAB code that compiles into standard Java or C++.
At the top level abstractions expressed in natural language pro-
gramming (sEnglish) form layers of abstractions to define
operations that lend themselves to human interpretation. In
principle a similar route can be used to compile natural lan-
guage into declarative rational agent code which then runs on
a Java-based interpreter. In this paper we focus on declarative agent code in the Gwendolen language, which must be created directly by the programmer; however, we have also investigated
the use of the Jason agent language and the programming of
Jason agents using the same natural language interface as that
which links to MATLAB.
Deployment architecture
Distributed homogeneous and also TCP/IP connected hetero-
geneous sets of processors can be used where Java and C++/
ROS can run. The primary approach is soft-real time but hard
real time implementation is also feasible.
Development environment
The current development environment is a mixture of Eclipse
(sEnglish/Java/Gwendolen), MATLAB/Simulink and the
ROS/C++ development systems.
Documentation
The high level code for agent capabilities is written using natu-
ral language programming. The BDI agent code is in a declara-
tive language such as Gwendolen or Jason, also presentable
using sEnglish. Low level code is in MATLAB/C++/Java and
documented in standard ways. The natural language descrip-
tions of capabilities facilitate communication within the devel-
opment team, reduce the effort for users and enable the creation of an information repository to capture the development
process for future use and maintenance.
III. Programming Paradigm Descriptions
Our research has focused on a novel agent architecture for autonomous systems. The broad operation of the architecture considered here is illustrated in Figure 1, where an agent may invoke the execution of an action encapsulated within the set of available agent plans P, based upon the set of raw data available from the agent's sensors, Ω_E, and the set of discrete statements, in a first order language L, that represent abstractions of the raw data. Such an agent is formally defined here as:
[Figure 1 schematic: a sensor/perception stream from the environment E feeds an abstractor of higher level states and a reasoning-by-logic inferencer; abstractions r_1..r_N and contexts ψ_1..ψ_5 feed a plan selector α, which chooses among Plans 1..N in the plan set P.]
Figure 1 Operational schematic of a deliberative architecture: yellow blocks represent executable plans that modify the environment E where an agent operates. A perception stream from the environment is abstracted and evaluated against contexts and abstractions. A plan selector function, α, results in the chosen executable plan from P.
Deliberation of an agent is its decision making to choose which of its plans and skills to execute at any moment of time to achieve its goals.
Deliberative agent
A deliberative agent is a tuple ag = {E, π, ψ, P, α} where the components are as follows:
E is a model of the environment, including the physical dynamics of any agent present in the environment;
π: 𝒫(Ω_E) → 𝒫(L)¹ is a function that converts raw sensor data into abstract statements in first order logic;
ψ ⊆ L is a set describing the agent's current perceived context;
P is a set of the executable plans of the agent; and
α: 𝒫(L) → P is a plan selector function that selects an individual plan p ∈ P using information from the abstractions of the sensor data created by π. α is, potentially, non-deterministic though it need not be.
The agent, ag, evolves in environment E during successive reasoning cycles that involve selecting a new executable plan from P based on which abstractions of ψ currently hold, or leaving the currently chosen plan to continue execution. This is a general definition that is not specific to the composition of ψ and P, nor to individual implementations of π and α.
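As a concrete illustration, the tuple above can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation: the sensor fields, abstraction rules and plan names are all invented assumptions.

```python
# Minimal sketch of the deliberative agent tuple ag = {E, pi, psi, P, alpha}.
# All field names, thresholds and plan names are illustrative assumptions.

def pi(raw_sensor_data):
    """Abstraction function: converts raw readings into first-order statements."""
    statements = set()
    if raw_sensor_data.get("range_m", float("inf")) < 50.0:
        statements.add("close_to_asteroid")
    if raw_sensor_data.get("fuel_pct", 100.0) < 10.0:
        statements.add("fuel_low")
    return statements

def alpha(psi):
    """Plan selector: picks one executable plan from P given the context psi."""
    if "fuel_low" in psi:
        return "abort_and_return"
    if "close_to_asteroid" in psi:
        return "station_keep"
    return "cruise"

def reasoning_cycle(raw_sensor_data, current_plan):
    psi = pi(raw_sensor_data)   # abstract the perception stream
    chosen = alpha(psi)         # select a plan from P
    # Leave the current plan running unless the selection changed.
    return chosen if chosen != current_plan else current_plan

print(reasoning_cycle({"range_m": 20.0, "fuel_pct": 80.0}, "cruise"))  # station_keep
```

Note how the deterministic `alpha` here is only one choice; as the definition states, the selector may equally be non-deterministic.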
The structure and connection of modules in software that implements the theoretical scheme in Figure 1 can look different.
Figure 2 illustrates the functional architecture of the agent that is implemented within this article, which comprises an augmented high-level reasoning system linked through an abstraction layer to low-level control and sensor systems. Such real-time control and sensing processes form a Physical Engine (P) that is situated in an environment that may be real or simulated; P consists of the aspects that are able to sense and effect change in the environment. P communicates with an Abstraction Engine (A). It is here that perception data is sampled from P and subsequently filtered to prevent flooding of the belief base that belongs to the Rational Engine (R), which dictates processes occurring within P via A. R is the highest level within the system and contains a Sense-Reason-Act loop. Sensing here relates to the perception of changes within the belief base that is being modified by A, via filtered abstraction of data from P, which may result in reasoning over new events and result in actions necessitating interaction with either P or the Continuous Engine (X). X augments R and is utilized to perform complex numerical procedures that may be used to assist reasoning processes within R or generate data that will be required for a physical process occurring within P; consequently a separate communication channel exists between P and X to enable direct information transfer between these engines without mediation from A. All actions performed by R are passed through the Abstraction Engine for reification. In this way, R is a traditional BDI agent dealing with discrete information, P and X are traditional control systems, while A provides the vital glue between all these parts by hosting primary communication channels and translating between continuous and abstract data. Interaction between the components in the architecture is governed by a language independent operational semantics [1].
¹ 𝒫(X) denotes the set of subsets of a set X.
The agent programming language within the Rational Engine
encourages an engineer to express decisions in terms of the facts
available to an agent, what it wants to achieve and how it will cope
with any unusual events. This reduces code size so an engineer need
not explicitly describe how the spacecraft should behave in each
possible configuration of the system, but can instead focus on those
facts that are relevant to particular decisions [39]. The key aspect of deliberation within agent programs allows the decision making part of the system to adapt intelligently to changing dynamic situations, changing priorities, and unreliable hardware systems. The distinctive features of our agent programming approach, which go beyond those of Jason or Gwendolen, are as follows.
[Figure 2 block diagram: the Environment (simulation or real) interacts with the Physical Engine (P); the Abstraction Layer (A) links P to the Reasoning Engine (R); the Continuous Engine (X) is connected to R and, via a direct data channel, to P. Labeled flows include continuous/abstract sense, act and query channels, plus sense/act and sense/reason/act loops.]
Figure 2 System structure: this maps onto the algorithmic description in Figure 1 by the Abstraction Layer mapping onto the Abstractor of higher level states; the Physical Engine maps to the Sensor/Perception Stream, the Continuous Engine to P and the Environment to E. The Reasoning Engine block corresponds to the deliberative actions responsible for the appropriate selection of plans, and performs the role of α.
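The engine decomposition just described can be sketched as a handful of cooperating objects. This is a rough, hedged sketch under invented interfaces and thresholds; the actual system uses Gwendolen, MATLAB and C++ components rather than anything like the classes below.

```python
# Rough sketch of the engine decomposition: the Abstraction Engine (A)
# filters continuous data from the Physical Engine (P) into discrete beliefs
# for the Reasoning Engine (R). All interfaces and thresholds are invented.

class PhysicalEngine:
    def sense(self):
        # Continuous perception data from hardware sensors (here: a stub).
        return {"range_m": 42.0}

class AbstractionEngine:
    def abstract(self, data):
        # Filter continuous data into discrete statements so the rational
        # engine's belief base is not flooded with raw numbers.
        return {"close_to_asteroid"} if data["range_m"] < 50.0 else set()

class ReasoningEngine:
    def __init__(self):
        self.beliefs = set()

    def sense_reason_act(self, abstractions):
        # Sense: perceive changes in the belief base; Reason/Act: pick a plan.
        self.beliefs |= abstractions
        return "station_keep" if "close_to_asteroid" in self.beliefs else "cruise"

p, a, r = PhysicalEngine(), AbstractionEngine(), ReasoningEngine()
action = r.sense_reason_act(a.abstract(p.sense()))
print(action)  # station_keep
```

The Continuous Engine (X) is omitted here; in the architecture it would sit alongside R as a numerical service with its own direct channel to P.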
1) A natural language based organization of events and rela-
tions in the physical and continuous engines is implement-
ed. The capabilities available to the agent are specified fully
using a natural language, enabling a natural semantic
bridge between agent intentions and system actions. This
feature enables the world modeling operations and capabil-
ities of the agent to be described in a natural language doc-
ument that itself compiles into code, whilst also capable of
being read by human operators. Consequently, heterogeneous knowledge of procedures and world models shared between the agent and human operators is made possible.
2) The organization of all procedural code based upon natural language sentence structures, whilst differing from the strict object oriented principles followed within CLARAty, enables code reuse through common operational abstractions.
3) The BDI agent language responsible for rational actions, a customized variant of the Gwendolen programming language, may be verified by model checking.
4) Three component engines, a physical engine, continuous
engine and reasoning engine, are integrated via an abstrac-
tion layer. The slow deliberative and fast reactive responses
are naturally blended together and are not organized into
layers. Timing is determined by the availability of abstract
data completed by the physical and continuous engines and
ready for use by the reasoning engine.
5) Rationality is based on the BDI deliberative agent para-
digm, as opposed to real-time planning.
Points 1 and 2 link the two core desires of MDS and CLARAty
by providing a linked and transparent abstraction between high
level system operation and low level function using linguistics, whilst enabling code reuse via common operational abstractions.
Furthermore, the same natural language abstractions can be uti-
lized by the deliberative agent for reasoning.
A. Natural Language Programming of Agent Actions
The capabilities available to an agent implementing the pre-
sented architecture are contained within the P and X engines;
these actions, either computational data manipulation or com-
plex interaction with hardware devices, are developed in a natu-
ral language facilitated by the sEnglish publisher, an Eclipse
based design environment [41] (break-out box sEnglish).
Just as the agent programming language used within the
Rational Engine encourages an engineer to express decisions
in terms of the beliefs available to an agent, abstracting agent
capabilities in a natural language encourages the development
engineer to encode specific skills in abstractions that are then
shared between operating personnel and the agent. The capa-
bilities abstracted may be divided into P and X abilities. P
abilities, or skills, relate to specific physical actions the agent
may invoke on the world environment; X skills are those related to complex queries that may be used to assist rational
decision making occurring within R or concerning specific
P skills. It is this shared understanding generated by the
abstraction of agent capabilities that enables coherent develop-
ment of all rational processes that are linked to agent actions.
Development using sEnglish commences by defining a central ontology, O, to define the concepts pertinent to the target system and that will be used within a natural language program document (NLP document) P = {O, S, N, m}, where S is a set of sentences in a natural language, N is an underlying programming language such as MATLAB, and m is a meaning definition function that assigns code in N to sentences in S. An NLP text, T, may be formed through composition of sentences from S and can be denoted by S*. Abstraction of a specific procedural action is performed through an expanding tree of sentences, whereby the trunk represents a core abstract action and a leaf represents a trivial component computation expressed in a target code language N. In practice the Eclipse plugin of sEnglish Publisher facilitates the mapping of S to code
in N [41, 42]. This methodology provides a natural abstraction
link between the high level meaning of a particular action,
which is the handle used by rational actions, and the low level
specifics of the action performative itself, which operates on
real-time hardware.
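The NLP document {O, S, N, m} can be pictured with a toy example. This is only an illustration of the idea of a meaning definition function, with invented sentences and with Python standing in for the target language N; real sEnglish maps to MATLAB, Java or C++ via the sEnglish Publisher.

```python
# Illustrative sketch of an NLP document P = {O, S, N, m}: a small ontology O,
# sentence set S, target language N (Python standing in for MATLAB), and a
# meaning function m assigning code to each sentence. All names are invented.

O = {"spacecraft", "thruster", "attitude"}           # ontology concepts

m = {  # meaning definition function: sentence -> code in N
    "Fire the main thruster.": "state['velocity_mps'] += 1.0",
    "Hold current attitude.":  "state['attitude_hold'] = True",
}

S = set(m)                                           # sentences with defined meaning

def compile_text(T, state):
    """Compile an NLP text T (a composition of sentences from S*) and run it."""
    for sentence in T:
        if sentence not in S:
            raise ValueError(f"Unknown sentence: {sentence!r}")
        exec(m[sentence], {}, {"state": state})      # execute the assigned code
    return state

state = compile_text(["Fire the main thruster.", "Hold current attitude."],
                     {"velocity_mps": 0.0, "attitude_hold": False})
print(state)  # {'velocity_mps': 1.0, 'attitude_hold': True}
```

The point of the sketch is the unambiguous sentence-to-code mapping: a text compiles only if every sentence matches a predefined structure, mirroring sEnglish's build-time verification.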
B. Rational Agent Decisions via Gwendolen
Rational decision making is based on symbolic reasoning using
plans that are implemented in the Rational Engine, R, and
described with a specialised rational agent (BDI) language
based upon the Gwendolen programming language [43].
Gwendolen is implemented in the Agent Infrastructure Layer
(AIL), a collection of Java classes intended for use in model
checking agent programs.
A full operational semantics for Gwendolen is presented in
[43]. Key components of a Gwendolen agent are a set, R, of beliefs, which are ground first order formulae, and a set, I, of
sEnglish
System English is a controlled natural language, i.e. a subset of standard English with meanings of sen-
tences ultimately defined by code in a high level programming
language such as MATLAB, Java or C++ [12]. It is natural lan-
guage programming made into an exact process. A correctly
formulated sEnglish text compiles into executable program
code unambiguously if predefined sentence structures
together with an ontology are defined. Errors in functionality are reduced due to the inherent verification mechanism of sEnglish upon build. This enables a programmer to enjoy the con-
venience of natural language while keeping with the usual
determinism of digital programs. Once a database of sentences and ontologies has been generated, the clarity and
configurability of a project written in sEnglish becomes evi-
dent. Of particular interest, when applied to development of
an agent system, is the link between the abstract manner in
which sEnglish solutions are developed and the abstractions
of a real agent system. This enables shared understanding
between the computational system and its operator.
intentions that are stacks of deeds associated
with some event. Deeds include the addition
or removal of beliefs, the establishment of new
goals, and the execution of primitive actions. A
Gwendolen agent may have several concurrent
intentions and will, by default, execute the first
deed on each intention stack in turn. It is possible to suspend an intention until some condition is met, in which case no deed
on the stack is executed until the intention is unsuspended.
Gwendolen is event driven and events include the acquisition of
new beliefs (typically via perception), messages and goals.
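The default execution policy, one deed from the top of each intention stack in turn, can be sketched as follows. The deed vocabulary and data layout here are invented for illustration and are not the Gwendolen syntax.

```python
# Hedged sketch of Gwendolen-style intention handling: each intention is a
# stack of deeds, and the agent executes the first deed of each intention in
# turn. The deed vocabulary and structure are illustrative, not the language.

beliefs = set()
intentions = [                      # each intention: a stack (list) of deeds
    [("add_belief", "docked")],
    [("add_belief", "sample_stored"), ("add_belief", "mission_done")],
]

def step(intentions, beliefs):
    """One pass: pop and execute the top deed of every intention stack."""
    for stack in intentions:
        if not stack:
            continue
        kind, arg = stack.pop(0)    # first deed of this intention
        if kind == "add_belief":
            beliefs.add(arg)        # deeds may add (or remove) beliefs
    return [s for s in intentions if s]   # drop completed intentions

intentions = step(intentions, beliefs)
print(sorted(beliefs))   # ['docked', 'sample_stored']
```

A suspended intention would simply be skipped in the loop until its condition is met, matching the behavior described above.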
Our implementation differs from the published Gwendolen
semantics by the addition of specialised deeds for interacting
with the Abstraction, Continuous and Physical Engine. This
permits the implementation of operational semantics for inter-
action between the components of our agent architecture as
specified in [1].²
C. Verification of the Gwendolen Agent
Model checking of finite automata, as a means to verify
intended system operation, is an established technique used on
systems that may be expressed by logical models and enabled
by the fact that logical properties of bounded models are
decidable [44]. Considering a system S that is represented by
the executable logical model ,MS then a property specification
given in terms of a logical formula { may be used by a model
checker to establish if .MSt { This satisfaction requirement
may be computationally tested for all possibilities within MS
via exhaustive testing.
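For a finite model this exhaustive test is straightforward to sketch. The Python fragment below (illustrative only; all names are invented) explores every reachable state of a transition system and, for an invariant property φ, either confirms M_S ⊨ φ or reports a counterexample path:

```python
from collections import deque

def check_invariant(initial_states, transitions, phi):
    """Exhaustively explore the finite model and test phi in every reachable
    state.  Returns (True, None) if the invariant holds, otherwise
    (False, path) where path is a counterexample trace to a violating state."""
    frontier = deque((s, (s,)) for s in initial_states)
    visited = set(initial_states)
    while frontier:
        state, path = frontier.popleft()
        if not phi(state):
            return False, path
        for nxt in transitions(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + (nxt,)))
    return True, None
```

For example, a modulo-4 counter satisfies the invariant "value < 4", while an unbounded counter does not; in the failing case the returned path is a concrete trace a designer can replay.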
A variation on model checking is the model checking of programs. Instead of creating a model of the system, the model checking of programs performs exhaustive testing over the actual code, and naturally hinges on the ability to determine all executions of a program. This is feasible for programs written in Java thanks to the existence of JavaPathfinder (JPF), which uses a modified virtual machine to manipulate program executions [45]. This tool has been used for the formal verification of rational agents implemented in languages, such as Gwendolen, which use the AIL, as detailed in [46].
Model checking is ideally suited to systems with a finite number of discrete states; unfortunately, hybrid systems embedded within the real world typically work in infinite and continuous spaces. There is a large body of work on model checking such systems which relies on analyzing the system to identify specific regions that can be encapsulated as states, often driven by representing such systems as hybrid automata [47, 6, 7]. Unfortunately, such models are not compositional, i.e., it is not possible to analyze such systems one component at a time in
²Gwendolen was also used to program the abstraction engine. Since the abstraction engine mediates between the different timings of the other engines and, in particular, was used to filter data received from the physical engine into that required for reasoning, we also modified the Gwendolen semantics to speed up the processing of perception (i.e., incoming data). In particular, perceptions had to be added directly into the agent's belief base rather than being treated as events, with plans used to convert them into beliefs. We do not, however, advocate the use of BDI languages for programming abstraction engines, partly as a result of this issue, and so consider this a minor modification.
order to keep reasoning tractable, and so this rapidly leads to issues of practicality. Our approach is to assume that while model checking of programs is a highly appropriate tool for the analysis of the reasoning engine, it isn't necessarily a suitable approach for analyzing other parts of the system.
IV. Description of a Complex Example
The behavior of any given agent using the architecture presented within Figure 2 is governed by its rational decision making processes and its capabilities, which are encapsulated within the R, X and P engines. A concrete agent is realized through the development of these elements in such a way that their combination enables complex actions to be carried out. Whilst fundamentally it is these elements that determine the capabilities of an agent, the abstraction and reification processes that occur within the abstraction layer are necessary to form a responsive and coherent agent operating within the environment, E. Here it is intended to demonstrate and explore the application of the agent system in a complex (asteroid) environment.
A. An Asteroid Exploration Agent
The mission considered involves the cooperative action of four autonomous spacecraft in an asteroid environment, tasked with cataloging the asteroids' number and composition. Only a small subset of the asteroids present within the environment have known positions initially, and asteroids are only observable if they are not occluded by other asteroids. This entails operation in a partially known environment and the need to develop enhanced knowledge of this environment through cooperative action, while performing other high level requirements. Primarily the mission is one of scientific measurement and observation. These observations include those taking place at a single point in time, continuous monitoring of an object, and those that require the cooperative actions of at least two spacecraft to focus resources on a point of interest. Similarly, some of the observations involve action in close proximity to an asteroid, and therefore some rational assessment of risk versus priority is required. Furthermore, we specify that: there are over-arching mission goals that can change dynamically based on data received from the spacecraft group on mission; there are unexpected hazards, such as energetic particle events and uncharted asteroids, which may require a spacecraft to drop its current goal and take evasive action; and the spacecraft have different capabilities because they carry different equipment, and these capabilities can change dynamically as equipment breaks.
A rational agent selects its actions for a reason; its deliberation is driven by planning, logic inference and sometimes simple reflexes.

B. Agent Capabilities
Agent capabilities are formulated using tags and sentences in sEnglish, where ultimately all meaning compiles into the
MATLAB development language. The result is a series of interconnected m-files that are built from the user-prescribed text, linked to the core ontology that defines the conceptual data structures used by the system.
First we illustrate the development of a medium level agent skill that corresponds to the meaning tag following_trajectory, resulting in procedural code to perform the function of following a specific trajectory. Considering the abstractions that may complete this control action, intuitively one must obtain a target (kinematic) state, determine the current system state, implement a control law to produce a control signal that will act to drive the error between the desired and actual states to zero, and then implement this control signal in hardware. Figure 3 illustrates the Eclipse based editor window for the definition of the sentence "Follow trajectory Mytraj." with activity name following trajectory. The following is a listing of a definition file (SEP-file = sEnglish Procedure file) to define the physical activity of following a trajectory that may have been created by a mental activity of the agent:
procedure name :: following trajectory
senglish sentences :: Follow trajectory Mytraj.
process, repeat mode :: physproc, runOnce
input classes and local names :: trajectory[Mytraj]
output classes and local names ::
senglish code :: Update the orbital target state Statedes for trajectory Mytraj using temporal progression. Obtain linear acceleration Acc and angular velocity Om from the gyroscope. Form the current state St and the direction cosine matrix Dcm using vision information and angular velocity Om. Generate guidance potential function Sigma based upon current state St and desired state Statedes. Generate control signal U based upon current state St and guidance potential function Sigma. Determine the required thrust vector T using the control signal U and direction cosine matrix Dcm. Implement the required thrust vector T on hardware.
matlab routines url :: www.sysbrain.org/asterocraft
conceptual graph :: [I]-(action)-[following]-(subj)-[trajectory:Mytraj]
testing formula :: Mytraj=rand_object(trajectory); following_trajectory(Mytraj);
section number :: 4
input defaults :: create_object(trajectory);
As each elemental abstraction is itself defined by sEnglish code, the high level abstraction of following_trajectory may be completely defined in a structured and meaningful way; its meaning is completely and transparently described by the subsequent abstractions. The meaning of the sentences is exactly the sEnglish code implemented for dealing with trajectory following; upon compilation, the generated following_trajectory.m file will be of the format shown in Code Fragment 4.1.
In the above development all meaning tags are names of actions formed from verbs. At any time the current agent activity may be queried to produce a meaningful response, which is useful for a human operator querying the current actions of an agent. A naive human operator may ask the system what it is doing and why, with the response being given in a natural language format, supplemented with the reasoned logic behind the execution of these actions. Each sEnglish sentence is matched with a routine call in MATLAB, as well as with a similar looking predicate for logic operations that abstracts away the (code based) meaning behind the predicates. Basic predicate abstractions from sentences are applied to both the world environment and the spacecraft model; these are passed by the Physical Engine to
Code Fragment 4.1 The compiled high level MATLAB script that represents the name tag following trajectory.

function following_trajectory (MyTraj)
StateDes = updating_target_state (MyTraj);
[Acc, Om] = obtaining_inertial_states;
[St, Dcm] = forming_current_state (Acc, Om);
Sigm = generating_guidance_potential (St, StateDes);
U = generating_control_signal (St, Sigm);
T = determining_thrust_vector (U, Dcm);
implementing_thrust_vector (T);
Figure 3 Editing agent code in an sEnglish plug-in under Eclipse.
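The sentence-to-routine matching that produces Code Fragment 4.1 can be illustrated with a toy Python sketch. This is not the actual sEnglish compiler: the patterns and helper names below are invented, and the real tool also uses the declared input and output classes of the SEP-file to place variables on the correct side of the assignment, which the sketch ignores by treating every capitalized object name as an argument:

```python
import re

# Hypothetical sentence patterns: each maps a fixed sentence shape to its
# meaning tag; '%' marks a slot for an object name (capitalized, as in sEnglish).
PATTERNS = {
    "Follow trajectory %.": "following_trajectory",
    "Generate control signal % based upon current state % and guidance potential function %.":
        "generating_control_signal",
}

def pattern_to_regex(pattern):
    """Turn a '%'-slotted sentence pattern into an anchored regex."""
    parts = [re.escape(p) for p in pattern.split("%")]
    return "^" + r"([A-Z]\w*)".join(parts) + "$"

def compile_sentence(sentence):
    """Translate one sentence into a routine call string (toy sketch only)."""
    for pattern, meaning_tag in PATTERNS.items():
        match = re.match(pattern_to_regex(pattern), sentence)
        if match:
            return f"{meaning_tag}({', '.join(match.groups())})"
    raise ValueError(f"no sentence pattern matches: {sentence!r}")
```

Under this scheme "Follow trajectory Mytraj." compiles to the call string "following_trajectory(Mytraj)".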
the Abstraction Engine. The information provided to the Abstraction Engine includes the following (see Fig. 4).

Propulsion System Data
Data relevant to the operation of the propulsion system hardware is passed to the Abstraction Engine; this information includes (but is not limited to): pressure information (main pressure vessel and fuel lines); valve activation status; and current/voltage data for internal systems.
Control Performance Data
This relates to output control requests that are sent by the control system and the actual output responses observed by onboard systems. Differences between these may enable the agent to infer the effectiveness of a particular control system and also to augment investigations into faulty control hardware.
Kinematic Data
High level kinematic state information is derived from the onboard sensors that monitor the world environment and abstracted into quantities relating to orbital acquisition and path following status.
Payload Status
The status (health) of the agent payload, which in this case primarily consists of sensors, is available to the Abstraction Engine.
Asteroid Field Updates
The internal navigation/mapping system of the agent may flag to the Abstraction Engine if a previously unknown asteroid is detected within the field. Such observations entail the need to check that the currently executing plans are still valid and that they do not pose a threat with regard to colliding with the newly detected body. Additionally, the details of newly detected bodies should be broadcast to all agent members so that each agent may plan with this enhanced knowledge of the local environment.
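A minimal version of such a plan-validity check might, for example, test every waypoint of the current trajectory against a safety margin around the newly detected body. The sketch below is illustrative only (the function and parameter names are invented), and a real check would also test the path segments between waypoints:

```python
import math

def plan_still_valid(waypoints, body_position, body_radius, margin=10.0):
    """Return True if every waypoint of the currently executing plan keeps
    at least `margin` clearance from the newly detected body.  Illustrative
    sketch: segments between waypoints are not tested here."""
    for waypoint in waypoints:
        if math.dist(waypoint, body_position) < body_radius + margin:
            return False
    return True
```

A False result would trigger replanning, while the broadcast of the new body's position lets the other agents run the same check against their own plans.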
Inbound Solar Energetic Particle Events
Coronal mass ejection shock-accelerated particles represent a hazard to space operations as they damage hardware components [48]. Inbound solar events are preceded by an enhancement of high energy particles that may theoretically be detected. This information enables localized detection so that an agent may take action.
The Abstraction Engine, A, is implemented using BDI style plans. It further abstracts the data received from P and sends it to the Rational Engine (R). It also reifies instructions from R, which are passed on to X and P. A's abstractions include the following.
Thruster Malfunction
A determines, from the Propulsion System Data, whether a thruster is working or not and its current health.
Payload Health
A determines the sensor payload operational status, which ultimately determines the utility of the agent.
Power System Health
A determines the agent's power generation capabilities, which restrict the capabilities of the agent.
A's reifications closely match the P and X abilities previously described. In most cases A adds a few low level details that are unimportant to the deliberations of the Rational Engine and manages housekeeping related to communication between the engines.
Adaptivity, and hence the agent's ability to complete a mission successfully, depends on its ability to monitor relevant events that are vital for achieving its goals.
Although it is the set of low level abstractions that dictates specific hardware actions, the agent is only concerned with
Figure 4 Illustration of perception processes which contribute to the belief base during each reasoning cycle of the BDI agent. [The figure pairs self and environment perception processes expressed as sEnglish sentences (e.g., "Monitor Expected Course of Self-Movement", "Monitor Collision Danger", "Check That All Team Members Completed Their Measurement Plans") with the boolean beliefs they produce (e.g., "Moving on Course As Expected", "Collision with Asteroid Ast Is Expected in ~300 s", "All Other Agents Completed All Measurement Plans").]
Natural language programming in sEnglish has proved to be a useful development tool to keep our system operations clear to programmers and easier to validate.
activation of high level action abstractions. It is these abstractions that represent the set of P and X abilities to which the agent has access and which it may invoke as part of a plan or as a means to augment its reasoning ability.
P: Physical Abstractions
Each agent has the ability to control its physical hardware by using abstract commands expressed in English sentences (Fig. 5). In the instance of an autonomous spacecraft, this relates to the ability to output required forces and torques for desired motion. This entails interaction with multiple systems at various levels of complexity: each agent has access to discrete-time closed-loop control solutions and may interact directly with the propulsion system to enable abilities such as valve switching and power routing to enable contingencies for failure. Sensor systems are assumed to be available for the internal control routines. While the physical agent body is controlled by appropriate force output, damaged hardware systems may result in spurious force and torque outputs.
X: Mental Abstractions
These abilities relate to complex tasks that are required to support reasoning within the R engine and control routines within the P engine. R is interested in the outcomes of implemented action, and specific control routines require specific data sets to be generated prior to their implementation. It is also X that dictates specific motion within the asteroid system, by generating non-intersecting trajectories to target destinations that the internal control systems are then directed to follow. Some of the mental abilities are displayed in Fig. 6.
This set of P and X abilities is that which the Rational Engine, R, may utilise to perform reasoned mental or physical actions. Reasoning itself is based upon abstract information that is received by the agent. Abstraction is a two-stage process within the agent architecture. The Physical Engine (P) sends a subset of the sensor data to the Abstraction Engine (A), which then filters, and in some cases further discretizes, the data based on the current situation.
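This second, discretizing stage can be illustrated as a simple thresholding step from continuous sensor values to boolean beliefs. The Python sketch below is illustrative only: the sensor names, belief names and thresholds are all invented, not the system's actual abstraction rules:

```python
def abstract_percepts(sensor_data, thresholds):
    """Discretize filtered sensor values into boolean beliefs for R.
    Illustrative sketch: sensor names, belief names and thresholds are invented."""
    beliefs = set()
    if sensor_data["main_pressure"] < thresholds["min_pressure"]:
        beliefs.add("thruster_malfunction")
    if sensor_data["particle_flux"] > thresholds["flux_alarm"]:
        beliefs.add("solar_event_expected")
    if sensor_data["path_error"] <= thresholds["max_path_error"]:
        beliefs.add("moving_on_course_as_expected")
    return beliefs
```

The Rational Engine then reasons only over the resulting boolean beliefs, never over the raw continuous quantities.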
C. Agent Rationality
The executable plans of the agent are programmed in a declarative manner, linking conditioned event triggers to courses of action through statements of the following generic format:

Event : {Context} <- {Plan};

where Event is the acquisition of a new belief, message or goal, {Context} is a predicate logic formula referring to the beliefs and goals of the agent, and {Plan} is a stack of deeds to be executed which can consist of action predicates, subgoal declarations, belief base changes, (un)lock and (un)suspend commands (as discussed in Section B). Abstract sensory
Figure 5 Some of the physical abilities: each agent is endowed with the ability to control its physical and communications hardware. [The figure pairs physical and communications capabilities (e.g., "Tracking a Trajectory", "Asking for Vision Assistance", "Requesting Joint Spectrographic Observations", "Broadcasting Insufficiency of Your Six-DOF Control") with their possible boolean state outcomes (e.g., "Succeeds Following Trajectory", "Follows Trajectory with State Error E", "Managed to Ask for Vision Assistance").]
Figure 6 Some of the mental capabilities: these relate to complex tasks that are required to support reasoning with the R engine and control routines within the P engine. [The figure pairs activity tags (e.g., "Planning Joint Spectrographic Observations", "Generating Evasive Trajectory", "Predicting Object Motion", "Turning Shield Toward Sun") with their boolean states (e.g., "Planned Joint Spectrographic Observations of Asteroid Ast", "Managed to Predict Object Motion with Uncertainty 0.9").]
perception, as provided by A, is the agent's primary link to the environment, and thus dictates its action by furnishing the agent with beliefs about the environment. Instances of change within the belief base can give rise to triggering events.
As an example, a relevant triggering event in this context is the detection of a previously unexplored asteroid. This will create an intention with an empty deed stack.
If the intention is selected for attention (by default the agent cycles through all its intentions in turn) then a plan will be selected for handling the event. One such plan might be:
+new_asteroid(Ast) : {~busy} <-
    +!planning_orbit(Ast, P),
    +!orbiting_asteroid(Ast, P);
Here, upon receiving the percept relating to the detection of a new asteroid, the agent checks against its internal belief base entries that it is not currently busy; this check is itself a Prolog evaluation over aspects of the agent's belief base. If it passes the belief base check, then the plan is a valid method of dealing with the event, and consequently the intention to execute the plan body may be instantiated. If the plan is selected then the plan body is placed upon the deed stack for the intention and the agent proceeds with execution.
Once the intention is selected, the first deed on the stack is executed, in this case the acquisition of a subgoal, +!planning_orbit(Ast, P). This generates a new event which may, in turn, trigger the selection of a plan such as:
+!planning_orbit(Ast, P) : {true} <-
    .query(planning_orbit(Ast, P));
The .query command is an extension of the Gwendolen language which communicates with the abstraction engine, requesting calculations, and suspends the execution of the intention until the result is returned. In this case the abstraction engine reifies the command planning_orbit(Ast, P) to a call to the agent capability planning_orbital_trajectory, described in Section B. X calculates a trajectory to asteroid Ast and returns the result, which the abstraction engine binds to P and then asserts as the shared belief planning_orbit(Ast, P) (i.e., that P is a suitable trajectory for reaching asteroid Ast). When the new shared belief is detected by the Rational Engine, the intention unsuspends and continues execution with +!orbiting_asteroid(Ast, P), the plan for which commands the physical engine to execute the trajectory now bound to P.
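This query-reify-resume interaction can be sketched as follows. The Python model below is illustrative and synchronous, with all names invented; in the real system the intention is genuinely suspended while X computes, and the result arrives asynchronously as a shared belief:

```python
class SharedBeliefs:
    """Beliefs shared between the rational and abstraction engines."""
    def __init__(self):
        self.facts = {}

class AbstractionEngine:
    """Reifies .query requests into capability calls (sketch)."""
    def __init__(self, shared, capabilities):
        self.shared = shared
        self.capabilities = capabilities   # predicate name -> callable

    def query(self, predicate, *args):
        result = self.capabilities[predicate](*args)
        self.shared.facts[predicate] = result   # assert the shared belief

class RationalEngine:
    def __init__(self, shared, abstraction):
        self.shared = shared
        self.abstraction = abstraction

    def handle_goal(self, predicate, *args):
        # .query suspends the intention until the shared belief appears;
        # in this synchronous sketch the "suspension" resolves immediately.
        self.abstraction.query(predicate, *args)
        return self.shared.facts[predicate]     # intention unsuspends, result bound
```

Binding a hypothetical trajectory planner as the planning_orbit capability then makes the goal handler return the computed trajectory to the resumed intention.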
The above describes the processing of a single new percept to result in a specific action. As mentioned in Section B, the agent is capable of dealing with multiple disparate and concurrent intentions that result in multiple concurrent actions: for instance, whilst following a specified trajectory, the agent may deal with thruster malfunctions, communication processes and high priority avoidance maneuvers.
The execution of the rational and abstraction engines is based primarily upon the use of plans for action and rules for reasoning about facts. The rational engine has implemented plans for the following.
Asteroid Selection
R requests distance information from X for a selection of target asteroids, and negotiates with other agents to avoid multiple agents surveying the same asteroid. R then instructs P to orbit the selected asteroid.
Thruster Repair
R selects a suitable course of action to compensate for a damaged thruster, based on the propulsion system data, and instructs P to reconfigure its hardware appropriately.
Sensing Assistance
If R infers that sensor equipment is required that it does not possess, it determines the closest agent with the correct equipment and contacts it for assistance.
The rational engine also has Prolog style reasoning rules for the following.
Closest Unexamined Asteroid
Having requested distance information from X, and possibly having received information from other spacecraft about their intentions,
Operation Complexity of an Asteroid Explorer Agent with 10 Reasoning Cycles Per Second (RCPS)

Component Category                        | Number of Abstractions | Maximum Depth of Abstraction Hierarchy | Average Ratio of Completion Time Relative to Reas. Cycle
Perception Abstractions                   | 557                    | 3                                      | 1
Logic Rules of Behavior                   | 339                    | 3                                      | 1
Programmed Plans of Physical Capabilities | 183                    | 2(6)                                   | 14377
Programmed Plans of Mental Capabilities   | 128                    | 6                                      | 292

Figure 7 Summary of the complexity of the asteroid explorer agent.
The agent is capable of dealing with multiple disparate and concurrent intentions that result in multiple concurrent actions.
R can determine the closest asteroid that no other spacecraft
intends to examine.
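Such a reasoning rule can be read as a simple function over the collected distance and intention information. The Python sketch below is illustrative only (the function and data names are invented; in the agent this is a Prolog style rule over the belief base, not procedural code):

```python
def closest_unexamined(distances, claimed_by_others):
    """Prolog-style rule rendered as a function: return the nearest asteroid
    that no other spacecraft has declared an intention to examine."""
    candidates = {ast: d for ast, d in distances.items()
                  if ast not in claimed_by_others}
    return min(candidates, key=candidates.get) if candidates else None
```

If every known asteroid is already claimed, the rule fails, and the agent must wait for new asteroid field updates or renegotiate.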
Closest Spacecraft
Similarly, R can use information about other spacecraft's intentions, capabilities and positions to select the closest spacecraft with a required capability.

Thruster Failures
In case of a fuel line leak or thruster malfunction, R derives the necessary actuator and control system reconfiguration.
1) Messaging and Negotiation
The Gwendolen programming language contains primitives for sending and receiving messages. Our simulation did not contain an accurate model for communication between satellites but simply assumed that messages were reliably transmitted between agents.
A number of protocols were implemented for agent negotiation, including a simple priority based protocol (i.e., each agent had a different priority) and the auction protocols described in [49]. In the auction case one agent was nominated as auctioneer and equipped with the necessary plans for running the auction protocol. In the priority case each agent yielded to a higher priority agent if informed of that agent's intention to investigate an asteroid.
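The priority based protocol can be sketched as a single conflict-resolution step. The Python fragment below is illustrative only: names are invented, and unlike the real system the claims are resolved centrally here rather than through exchanged messages between the agents:

```python
def resolve_claims(claims, priorities):
    """One conflict-resolution pass of the priority protocol: for each
    asteroid, the highest-priority claimant keeps it; everyone else yields."""
    winners = {}
    for agent, asteroid in claims.items():
        holder = winners.get(asteroid)
        if holder is None or priorities[agent] > priorities[holder]:
            winners[asteroid] = agent
    return winners   # asteroid -> agent that keeps the claim
```

Agents whose claims were lost would then reapply their asteroid selection rule to the remaining unclaimed asteroids.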
D. Lessons Learned
The need to pre-program a number of complex skills, such as discovering earliest observation opportunities, tracking a trajectory or agreeing a planned approach to a target asteroid, required detailed dynamical analysis at the agent programming level. The agent responds quickly to the vast majority of environmental situations, and the need for onboard planning using X is rare during a mission. However, the pre-programmed skills of the agent are not always sufficient to solve a problem, for instance in the case of a rare combination of hardware failures. For these situations the agent uses planners in the continuous engine, X, which can be slower.
If insufficient care is taken to choose suitable hierarchical abstractions for environmental situations, then very long execution times can result for the formal verification. Verification is not absolute: it is not possible to say that the mission is guaranteed to succeed. The verification effort involves listing the hardware and environmental situations, and proving that the agent makes appropriate choices in the face of these. The system designer's task is to make it unlikely that an agent's actions will fail. This involves the agent designer's knowledge of what can physically be anticipated in the environment, including onboard hardware failure. Note that this is not as negative as it sounds: it is provably impossible to build an engineering system that works in an unknown environment and never fails.
Overall, the agent architecture makes a good compromise between the use of pre-programmed agent solutions and onboard planning on the fly. Describing the complexity of the environment for verification is beyond the scope of this paper and is the subject of our future investigations to improve the efficiency of problem solving by our agents.
V. Implementation in Simulated Environment
Our system has been implemented in a simulated asteroid scenario.
A. Software Implementation
The simulated asteroid environment has been implemented
using jBullet, a Java port of the Bullet Physics Library [50],
with VR output being performed using OpenGL. Collision
impacts between all system bodies may occur. Small impacts
to spacecraft may result in only a disturbance to their
Figure 8 The framework architecture for applying the presented agent system completely within software (a) and on hardware (b). (a) Agent software system architecture immersed within a software environment to simulate the asteroid spacecraft mission, showing the software components (logical reasoning, complementary processing, agent skill abstraction repository, simulated hardware, and dynamic propagation, collision detection and visualization) and their interactions. (b) Agent software system integrated into representative spacecraft hardware, showing the replacement of simulation tools with hardware and real dynamic processes.
trajectory; however, fuel line ruptures, total loss of control thruster(s), loss of the sensor payload and even complete agent loss may result from major impact events. Hardware failure may occur as a result of a collision; it may also occur as a random hardware glitch, which the agent must also tolerate.
The complete system is a multi-language software system: the abstraction processes and agent reasoning are performed in Java, and the spacecraft hardware is modeled in Simulink, which in turn uses skill abstractions developed with sEnglish. The hardware actions are propagated within the Java based asteroid environment; a schematic of this software system and its interactions is given in Figure 8(a).
The simulation was initialized with four spacecraft agents distributed throughout a partially known asteroid field, with the high level goal of cataloging all the observable asteroids. The spacecraft were able to negotiate (and renegotiate) responsibilities for orbiting asteroids, correct for thruster malfunctions, and operate in phased orbits of several spacecraft around a single asteroid. This is shown in Figure 9. During all operations, notification of an approaching solar storm acted to override all current activities, forcing the spacecraft to seek shelter behind the closest suitable asteroid.
B. Hardware Implementation
For application of the agent system on hardware, true hardware processes and real dynamics replace the Simulink models and jBullet simulation used within the complete software application. This integrated (hardware) agent system is shown in Figure 8(b), and the ground based hardware facility used is shown in Figure 10.
On transferal of the agent system architecture to the hardware system, there is a clear difference between interfacing true hardware devices and those modeled within Simulink. Bridging this gap amounts to enriching the sEnglish database to include interfaces for the specific hardware devices being implemented; neither the high level sEnglish performative abstractions, nor the agent reasoning code, were modified for the hardware application.
The test facility demonstrated the capability of the agent architecture to perform aspects of the asteroid scenario, namely motion to nominated points with disturbance compensation, and phased orbiting of a nominated point through negotiation with companion agents. Videos of some of these actions are available to view at http://www.sheffield.ac.uk/acse/staff/smv.
VI. Observations
This article has presented the application of a formally verifiable agent architecture, linked to a skill abstraction library formulated in a natural language, within a complex simulation environment and a representative hardware system. We sought to explore the system's flexibility in an environment requiring a high degree of autonomy, and its feasibility for real-time implementation.
Encoding agent abilities through natural language abstractions resulted in an intuitive interface into system operation; this is an innate advantage of the method used. In turn, this abstraction link to hardware and software processes provided a clear link between reasoned decisions and output actions. This resulted in a syntactically clear agent system, whose configuration and subsequent operation are entirely transparent. The abstraction method also facilitated expansion and modification of agent capabilities without disturbing existing abilities, as was evidenced with the transferal of the system to hardware; the
Figure 9 Screen capture of two spacecraft agents entering a phased orbit about a nominated asteroid.

Figure 10 Image of the ground test facility and model frame spacecraft robots.

5DOF spacecraft models were used to test operations, complemented by virtual reality simulations in jBullet.
only change related to the addition of low level sEnglish abstractions to interface specific hardware systems; existing code, inclusive of both reasoning and higher level action performatives, remained unaltered.
In the software scenario, the agent was provided with a minimal set of physical/mental skills and reasoning processes, yet was able to survey a section of a partially known asteroid field in the presence of internal hardware failures and while reacting to dynamic hazards.
References
[1] L. A. Dennis, M. Fisher, N. Lincoln, A. Lisitsa, and S. M. Veres, "Declarative abstractions for agent based hybrid control systems," in Proc. 8th Int. Workshop Declarative Agent Languages Technologies, 2010, vol. 6619, pp. 96–111.
[2] N. Lincoln, S. M. Veres, L. A. Dennis, M. Fisher, and A. Lisitsa, "An agent based framework for adaptive control and decision making of autonomous vehicles," in Proc. IFAC Workshop Adaptation Learning Control Signal Processing, 2010, pp. 310–317.
[3] S. M. Veres and N. K. Lincoln, "Testbed for satellite formation flying control system verification," in Proc. AIAA InfoTech Conf., Rohnert Park, CA, 2007.
[4] JAXA. Hayabusa spacecraft [Online]. Available: http://www.isas.ac.jp/e/enterp/missions/hayabusa
[5] NASA. Dawn spacecraft [Online]. Available: http://dawn.jpl.nasa.gov
[6] R. Alur, C. Courcoubetis, N. Halbwachs, T. A. Henzinger, P.-H. Ho, X. Nicollin, A. Olivero, J. Sifakis, and S. Yovine, "The algorithmic analysis of hybrid systems," Theor. Comput. Sci., vol. 138, no. 1, pp. 3–34, 1995.
[7] T. A. Henzinger, "The theory of hybrid automata," in Proc. Int. Symp. Logic Computer Science, IEEE Computer Society Press, 1996, pp. 278–292.
[8] M. J. Wooldridge, An Introduction to MultiAgent Systems. New York: Wiley, 2002.
[9] M. Kloetzer and C. Belta, "A fully automated framework for control of linear systems from temporal logic specifications," IEEE Trans. Autom. Contr., vol. 53, no. 1, pp. 287–297, 2008.
[10] I. A. Ferguson, "TouringMachines: Autonomous agents with attitudes," Computer, vol. 25, no. 5, pp. 51–55, 1992.
[11] R. H. Bordini, M. Dastani, J. Dix, and A. E. Fallah-Seghrouchni, Eds., Multi-Agent Programming: Languages, Tools and Applications. New York: Springer-Verlag, 2009.
[12] S. M. Veres, Natural Language Programming of Agents and Robotic Devices: Publishing for Humans and Machines in sEnglish. London, U.K.: SysBrain, 2008.
[13] R. H. Bordini, J. F. Hübner, and M. J. Wooldridge, Programming Multi-Agent Systems in AgentSpeak Using Jason. New York: Wiley, 2007.
[14] J. P. Müller, The Design of Intelligent Agents. New York: Springer-Verlag, 1996.
[15] E. Gat, "On three-layer architectures," in Artificial Intelligence and Mobile Robots, D. Kortenkamp, R. P. Bonnasso, and R. Murphy, Eds. Menlo Park, CA: AAAI Press, 1997, pp. 195–210.
[16] J. P. Müller, M. Pischel, and M. Thiel, "Modeling reactive behaviour in vertically layered agent architectures," in Proc. Workshop Agent Theories, Architectures, Languages, vol. 890, pp. 261–276, 1995.
[17] CLARAty. NASA Jet Propulsion Laboratory [Online]. Available: http://claraty.jpl.nasa.gov
[18] K. Fregene, D. C. Kennedy, and D. W. L. Wang, "Toward a systems- and control-oriented agent framework," IEEE Trans. Syst. Man, Cybern. B, vol. 35, no. 5, pp. 999–1012, 2005.
[19] J. Lygeros, K. H. Johansson, S. N. Simic, J. Zhang, and S. S. Sastry, "Dynamical properties of hybrid automata," IEEE Trans. Autom. Contr., vol. 48, no. 1, pp. 2–17, 2003.
[20] A. Mohammed and U. Furbach, "Multi-agent systems: Modeling and verification using hybrid automata," in Proc. 7th Int. Conf. Programming Multi-Agent Systems, 2009, pp. 49–66.
[21] R. Alur, T. A. Henzinger, G. Lafferriere, and G. J. Pappas, "Discrete abstractions of hybrid systems," Proc. IEEE, pp. 971–984, 2000.
[22] L. Molnar and S. M. Veres, "Hybrid automata discretising agents for formal modelling of robots," in Proc. 18th IFAC World Congr., 2011, vol. 18, pp. 49–54.
[23] MATLAB Programming Environment, Simulink, Stateflow and the Real-Time Workshop, MathWorks, Natick, MA.
[24] A. E. Fallah-Seghrouchni, I. Degirmenciyan-Cartault, and F. Marc, "Framework for multi-agent planning based on hybrid automata," in Proc. 3rd Central and Eastern European Conf. Multi-Agent Systems, 2003, pp. 226–235.
[25] R. J. Firby, "Adaptive execution in complex dynamic worlds," Yale Univ., New Haven, CT, Tech. Rep., 1990.
[26] I. A. Ferguson, "Toward an architecture for adaptive, rational, mobile agents," in Proc. 3rd European Workshop Modelling Autonomous Agents in a MultiAgent World, 1991, pp. 249–261.
[27] M. Wooldridge and N. R. Jennings, "Intelligent agents: Theory and practice," Knowl. Eng. Rev., vol. 10, no. 2, pp. 115–152, 1995.
[28] A. S. Rao and M. Georgeff, "BDI agents: From theory to practice," in Proc. 1st Int. Conf. Multi-Agent Systems, San Francisco, CA, 1995, pp. 312–319.
[29] H. V. D. Parunak, A. D. Baker, and S. J. Clark, "The AARIA agent architecture: From manufacturing requirements to agent-based system design," J. Integr. Comput.-Aided Eng., vol. 8, no. 1, pp. 45–58, 2001.
[30] M. E. Bratman, Intention, Plans and Practical Reason. Stanford, CA: CSLI Publications, 1987.
[31] F. Bellifemine, G. Caire, and D. Greenwood, Developing Multi-Agent Systems with JADE. New York: Wiley, 2007.
[32] A. Pokahr, L. Braubach, and W. Lamersdorf, "Jadex: Implementing a BDI-infrastructure for JADE agents," EXP Search Innov. (Special Issue on JADE), vol. 3, no. 3, pp. 76–85, Sept. 2003.
[33] N. Howden, R. Rönnquist, A. Hodgson, and A. Lucas, "JACK intelligent agents: Summary of an agent infrastructure," in Proc. 5th Int. Conf. Autonomous Agents, 2001.
[34] M. P. Georgeff and A. L. Lansky, "Reactive reasoning and planning," in Proc. 6th Nat. Conf. Artificial Intelligence, 1987, pp. 677–682.
[35] F. F. Ingrand, M. P. Georgeff, and A. S. Rao, "An architecture for real-time reasoning and system control," IEEE Expert, vol. 7, no. 6, pp. 34–44, 1992.
[36] NASA Jet Propulsion Laboratory. Mission data system [Online]. Available: http://mds.jpl.nasa.gov/public/
[37] N. Muscettola, P. P. Nayak, B. Pell, and B. Williams, "Remote agent: To boldly go where no AI system has gone before," Artif. Intell., vol. 103, nos. 1–2, pp. 5–48, 1998.
[38] D. E. Bernard, G. A. Dorais, C. Fry, B. Kanefsky, J. Kurien, W. Millar, N. Muscettola, U. Nayak, B. Pell, K. Rajan, N. Rouquette, B. Smith, and B. C. Williams, "Design of the remote agent experiment for spacecraft autonomy," in Proc. IEEE Aerospace Conf., 1998, pp. 259–281.
[39] L. A. Dennis, M. Fisher, N. Lincoln, A. Lisitsa, and S. M. Veres, "Reducing code complexity in hybrid control systems," in Proc. 10th Int. Symp. Artificial Intelligence, Robotics Automation Space (i-Sairas), 2010, pp. 1–6.
[40] J.-H. Kim, I.-W. Park, and S. A. Zaheer, "Intelligence technology for robots that think," IEEE Comput. Intell. Mag., vol. 8, no. 3, pp. 70–84, 2013.
[41] SysBrain Ltd. sEnglish publisher [Online]. Available: http://www.systemenglish.org
[42] S. M. Veres, "Theoretical foundations of natural language programming and publishing for intelligent agents and robots," in Proc. 11th Conf. Towards Autonomous Robotic Systems, 2010.
[43] L. A. Dennis and B. Farwer, "Gwendolen: A BDI language for verifiable agents," in Logic and the Simulation of Interaction and Reasoning, B. Löwe, Ed. Aberdeen, U.K.: AISB, 2008.
[44] G. J. Holzmann, The Spin Model Checker: Primer and Reference Manual. Reading, MA: Addison-Wesley, 2003.
[45] W. Visser, K. Havelund, G. P. Brat, S. Park, and F. Lerda, "Model checking programs," Autom. Softw. Eng., vol. 10, no. 2, pp. 203–232, 2003.
[46] L. A. Dennis, M. Fisher, M. Webster, and R. H. Bordini, "Model checking agent programming languages," Autom. Softw. Eng., vol. 19, no. 1, pp. 5–63, 2012.
[47] R. Alur, C. Courcoubetis, T. Henzinger, and P. Ho, "Hybrid automata: An algorithmic approach to the specification and verification of hybrid systems," in Hybrid Systems (Lecture Notes in Computer Science, vol. 736), R. Grossman, A. Nerode, A. Ravn, and H. Rischel, Eds. New York: Springer-Verlag, 1993, pp. 209–229.
[48] J. Feynman and S. B. Gabriel, "On space weather consequences and prediction," J. Geophys. Res., vol. 105, no. A5, pp. 10543–10564, 2000.
[49] D. P. Bertsekas, "Auction algorithms for network flow problems: A tutorial introduction," Comput. Optim. Appl., vol. 1, no. 1, pp. 7–66, 1992.
[50] JBullet. JBullet: Java port of bullet physics library [Online]. Available: http://jbullet.advel.cz
The anthropomorphic programming paradigm we used enabled us to create a computationally very complex intelligent system to parallel human piloting capabilities.