TRANSCRIPT
A software framework for the
Advanced Technology Solar
Telescope
Steve Wampler
What is ATST?
• 7X photon collecting power
• Unvignetted light path
• Prime focus heat stop/occulter
• Integrated 1300+ actuator AO
• Rotating Coudé lab
• 70 TB/day collected, 5 TB/day delivered
• Haleakala, Maui, Hawaii
Advanced Technology Solar Telescope
Observatory SW Trends
• Away from ‘custom’ toward:
  – Commodity hardware (PC), distributed systems
  – Commodity OS (Linux, FreeBSD, etc.)
  – Common software infrastructure
  – COTS/community communication middleware
  – Standard software models (tiered, separation of technical and functional architectures, etc.)
  – Common high-level models and architectures
Common Software
• Common framework for all observatory software
  – ALMA ACS is an excellent example
  – ATSTCS draws from the general ACS model
• Much more than just a set of libraries
• Separates technical and functional architectures
• Provides bulk of technical architecture through Container/Component model
• Allows application developers to focus on functionality
• Provides consistency of use
• System becomes easier to maintain and manage
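The Container/Component split described above can be sketched roughly as follows. This is a minimal illustration of the idea, not the actual ATSTCS API: all class and method names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the container owns the technical architecture
// (lifecycle, startup/shutdown, service wiring); a component supplies
// only functionality behind a narrow interface.
interface Component {
    void init();                       // lifecycle, driven by the container
    void uninit();
    String get(String attribute);      // narrow functional interface
    void set(String attribute, String value);
}

class Container {
    private final List<Component> components = new ArrayList<Component>();

    // The container, not the component, handles deployment order
    // and lifecycle transitions.
    public void deploy(Component c) {
        c.init();
        components.add(c);
    }

    public void shutdown() {
        for (Component c : components) c.uninit();
        components.clear();
    }

    public int size() { return components.size(); }
}

// Illustrative component: application code only implements behavior.
class TemperatureMonitor implements Component {
    private String status = "stopped";
    public void init() { status = "running"; }
    public void uninit() { status = "stopped"; }
    public String get(String a) { return "status".equals(a) ? status : null; }
    public void set(String a, String v) { /* no settable attributes */ }
}
```

Because the container drives the lifecycle, the application developer writes only the body of the component and inherits the rest, which is what lets developers focus on functionality.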
Communication Middleware
• Avoids need to write communication infrastructure in-house
  – Less effort
  – Less in-house expertise required
  – Access to outside expertise
  – Benefit from wide-spread use
• Often provides rich set of features
• Supports actions required to run in a distributed environment
• Lots to choose from (both commercial and community)
What’s wrong with Middleware?
• 900-lb gorilla:
  – Promotes dependency on small set of vendors
  – Hard to control direction
  – Typically deeply integrated into common services
  – Difficult to change once integrated (lots to choose from, but once you choose, you’re stuck)
  – Sometimes obsolete before deployment
  – Too feature-rich (Kid-in-Candy-Store syndrome)
• Adopting standards (e.g. CORBA) instead of specific packages can help, but:
  – Standards not particularly agile, often incomplete
  – Can still get stuck as technology advances
ATST Common Software
ATST Containers and Components
[Diagram: a Container hosting multiple Components; the Container provides lifecycle control and service access interfaces, and each Component wraps user code behind a custom interface]
ATST Components
[Diagram: Container, Component, and Controller classes with their technical interfaces (IRemote, ILifeCycle, IContainer, IComponent, IController) and functional interfaces (IFunctional)]
• Narrow functional interfaces
  – IComponent has just two commands: get, set
  – IController has six commands: get, set, submit, pause, resume, and cancel
• Controller implements “command/action/response” model
  – Actions execute asynchronously from command submission (no blocking)
  – Supports multiple simultaneous actions
  – Supports scheduled actions
• These classes implement technical interfaces; subclasses add functionality, e.g. doSubmit, doAction
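The command/action/response model can be sketched as below: submit() returns immediately while the action runs asynchronously, and subclasses override doSubmit/doAction hooks. This is a hypothetical illustration of the pattern, not the real ATSTCS Controller (the class names FilterWheelController and submitAndWait are invented here).

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of command/action/response: the base class owns
// the threading (technical architecture); subclasses override the
// doSubmit/doAction hooks (functional architecture).
abstract class Controller {
    private final ExecutorService actions = Executors.newCachedThreadPool();

    // Command phase: quick validation, returns without blocking on the action.
    public Future<String> submit(final String config) {
        doSubmit(config);                               // subclass hook: accept/reject
        return actions.submit(() -> doAction(config));  // action phase, asynchronous
    }

    // Convenience for demos/tests: submit, then wait for the response.
    public String submitAndWait(String config) {
        try {
            return submit(config).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void shutdown() { actions.shutdown(); }

    protected abstract void doSubmit(String config);
    protected abstract String doAction(String config);
}

// Illustrative subclass: only the hooks are application code.
class FilterWheelController extends Controller {
    protected void doSubmit(String config) {
        if (config == null) throw new IllegalArgumentException("no config");
    }
    protected String doAction(String config) {
        // ...perform the (possibly long-running) motion here...
        return "done:" + config;
    }
}
```

Because each submit() dispatches its own task, multiple actions can run simultaneously, matching the bullet above.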
Many services available
Service Layers
• Service tools
• Component-specific data
• All knowledge of service implementation isolated by the respective service tool
• Tool chaining keeps tools small and focused
• Uniform access from Components
• Designed for use, not implementation
• Bridge between functional and technical architectures
Toolbox Loaders
[Diagram: Containers/Components sit above a Toolbox; a Toolbox Loader installs tools from a palette of service tools into the Toolbox, which holds both shared and private tools]
Ex: Toolbox Loader
An Example: Event Service
• Container:
  setToolBoxLoader("atst.cs.ice.IceToolBoxLoader");
  setToolBoxLoader("atst.cs.jaco.JacoToolBoxLoader");
• Component:
  Event.post(eventName, eventMessage);
• Toolbox:
  eventTool.post(getTimestamp(), appName, eventName, eventMessage);
[Diagram: the Toolbox delegates to either the ICE Event Service Tool or the JacORB Event Service Tool]
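The delegation step in this example can be sketched as follows: the component-facing call is generic, and the toolbox forwards it to whichever middleware-specific tool was loaded. This is a simplified stand-in for the ATSTCS design; the interface and class names below (EventServiceTool, RecordingEventTool) are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of service-tool isolation: components post events
// through a uniform API, and only the loaded tool knows the middleware.
interface EventServiceTool {
    void post(long timestamp, String app, String event, String message);
}

class Toolbox {
    private EventServiceTool eventTool;

    // A toolbox loader would install the concrete (ICE, CORBA, ...) tool here.
    public void setEventTool(EventServiceTool tool) { eventTool = tool; }

    // Uniform access point: adds the timestamp and app name, then delegates.
    public void post(String app, String event, String message) {
        eventTool.post(System.currentTimeMillis(), app, event, message);
    }
}

// Stand-in for a middleware-specific tool; a real one would publish
// over ICE or a CORBA notification channel.
class RecordingEventTool implements EventServiceTool {
    final List<String> posted = new ArrayList<String>();
    public void post(long ts, String app, String event, String message) {
        posted.add(event + "=" + message);
    }
}
```

Swapping setEventTool() arguments is all it takes to move between middlewares, which is why the choice can be made at runtime without touching component code.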
An Example: Log Service
• Component:
  Log.warn("M2 temp overload");
• Toolbox:
  logTool.log(getTimestamp(), appName, "warning", message);
[Diagram: three chained log service tools: Buffered DB Log Service Tool, Post Log Service Tool, Log Service Tool]
• Three service tools chained
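The chaining shown in this example can be sketched as a simple chain-of-responsibility: each tool handles its piece of the work and forwards the message to the next, so each tool stays small and focused. Again a hypothetical illustration, not the ATSTCS classes (ConsoleLogTool and BufferedDbLogTool are invented names).

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of service-tool chaining for the log service.
abstract class LogServiceTool {
    private LogServiceTool next;

    // Link another tool after this one; returns it so chains read left-to-right.
    public LogServiceTool chain(LogServiceTool n) { next = n; return n; }

    public void log(String severity, String message) {
        handle(severity, message);          // this tool's focused job
        if (next != null) next.log(severity, message);  // pass it on
    }

    protected abstract void handle(String severity, String message);
}

// Stand-in tools: one counts (e.g. console output), one buffers for a DB.
class ConsoleLogTool extends LogServiceTool {
    int handled = 0;
    protected void handle(String severity, String message) { handled++; }
}

class BufferedDbLogTool extends LogServiceTool {
    final List<String> buffer = new ArrayList<String>();
    protected void handle(String severity, String message) {
        buffer.add(severity + ": " + message);  // flushed to the DB in batches
    }
}
```

A three-tool chain as in the slide would just be `a.chain(b).chain(c)`; each tool remains oblivious to what the others do.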
How well is ATSTCS working?
• Approach seems successful (early alpha release supported both CORBA and ICE)
  – Choice of service tool has no impact on component development; can select between CORBA/ICE at runtime
  – Already helped in unexpected ways (licensing conflicts)
• ICE/CORBA service tools easy to write:
  – Both similar architectures
  – Both align well with access helper models
• Less-aligned services likely to be harder
  – Changes are well isolated at the service tool layer, however
• 3rd-party implementation of TCS simulation
• Developer code “simpler”
Ex: Controller Simulator
Event system performance
• Three dual-core machines (source, target, and event server), GbE network, CentOS 4
• Two versions of ICE: 2.1 and 3.2 (CORBA OmniNotify/TAO much slower)
• 1,000,000 events, ~120 bytes each
• All Java (JDK 1.6) code (C++ and Python currently much slower)
• Two service tools tested (unbatched and batched), now combined into a single tool (per-event-stream batching), both fully stable
Log service performance
• All dual-core machines (source, log DB server), CentOS 4, GbE
• PostgreSQL version 8.2.4, JDK 1.6
• 500,000 messages, two sizes of payload
• End-to-end performance (client to database)
• Buffered and unbuffered service tools tested
– Multi-buffering to handle message spikes
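The multi-buffering idea mentioned above can be sketched as follows: producers append to an active buffer, and the flusher swaps in a fresh buffer under a short lock, then writes the full one out without blocking producers. This is a generic double-buffering sketch, not the actual ATSTCS buffered log tool; the class name MultiBufferedLog is invented.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of multi-buffering to absorb message spikes.
class MultiBufferedLog {
    private List<String> active = new ArrayList<String>();
    private int flushedCount = 0;

    // Producers only append; cheap even during spikes.
    public synchronized void log(String message) {
        active.add(message);
    }

    // Called periodically by a background flusher thread: the buffer
    // swap is the only step done under the lock, so producers are
    // never blocked for the duration of the (slow) database write.
    public void flush() {
        List<String> full;
        synchronized (this) {
            full = active;
            active = new ArrayList<String>();
        }
        flushedCount += full.size();   // stand-in for the batched DB write
    }

    public synchronized int pending() { return active.size(); }
    public int flushed() { return flushedCount; }
}
```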
But wait, there’s more…
• Toolboxes themselves can be chained, so higher-level application layers can delegate management of specialized services to ATSTCS (e.g. DHS header service and data distribution service)
• Service tool chains can be changed dynamically (DHS quick-look probe)
• Can help with system migration as software technology advances, and with integrating legacy systems:
  – E.g. (carefully) chain ICE and CORBA tools together
  – Reduces effects of ‘big bang’ system upgrades when core infrastructure changes