
Page 1: Platform Definition IMPAX Data Center (IDC) 3.1

Document Status: Approved Livelink ID: 55694328 Version: 5

Platform Definition 1/42 Livelink Node ID: 55694328 AGFA INTERNAL DOCUMENT

Printed copies are not controlled and shall be verified on the electronic document management system. Template 46865842, v.4 (template revision history at 39703795)

Agfa HealthCare Planning document

Platform Definition

IMPAX Data Center (IDC) 3.1.1

LiveLink NodeID 55694328

Author/Responsible (Owner)

PD&V Representative Dirk Somers

Mandatory Reviewers

Product Manager Stephen Mallory

Development Representative Charles Gieringer,

Verification Representative Sue Sing-Judge, Peter Libbrecht

Optional Reviewers

QARA Representative Chip Beckner

Documentation Representative Jody Frederick

Architecture Nikolas Boel, Kevin Eagles (acting)

Program Manager/Project Manager Leyton Collins, Nadia De Paepe, Hendrik Plaetinck,

PD&V representatives Bart Domzy, David Burton, David MacLean, Dean Robertson, Dieter Van Bogaert, James Vanderleeuw, Mark Brown, Mike Letherbarrow

Service Design Dan Dubois, Tim Fisher

Approvers

Product Manager Stephen Mallory

Development Representative Charles Gieringer

Verification Representative Sue Sing-Judge, Peter Libbrecht

PD&V Manager Dirk Somers


CONTENT

1 Introduction .......................................................................................................... 5

1.1 Purpose ........................................................................................................................5

1.2 Scope............................................................................................................................5

1.3 Platform Definition Process............................................................................................5

1.4 Definitions, Acronyms, and Abbreviations ......................................................................6

2 Product Description ............................................................................................... 7

2.1 Deployment options: New Installs ..................................................................................7

2.2 Deployment options: Updates / Upgrades.......................................................................7

3 Hypervisor and Virtualization Requirements........................................................... 9

3.1 Virtualization/Hypervisor layer ..................................................................................... 9
3.1.1 VMware vSphere Hypervisor specifications ........................................................ 9
3.1.2 Microsoft HyperV .............................................................................................. 9
3.1.3 Other Hypervisors ............................................................................................. 9
3.1.4 Virtualized Cloud offerings ................................................................................ 9

4 Operating System Requirements .......................................................................... 11

4.1 LINUX.........................................................................................................................11

4.2 Microsoft Windows OS – not supported for IDC ............................................................11

4.3 SPARC Solaris 10 ........................................................................................................11

4.4 Other Operating Systems – not supported.....................................................................12

5 VMware Requirements for IDC DB servers ............................................................ 13

5.1 VMware Guest VM for IDC DB servers .......................................................................... 13
5.1.1 VM & OS Specifications ................................................................................... 13
5.1.2 Disk Partitioning ............................................................................................. 13
5.1.3 Network specifications .................................................................................... 15

5.2 VMware Guest VM for the IDC ILM DB Server...............................................................15

5.3 VMware Guest VM – IDC DB deployed as ARR-only ...................................................... 15
5.3.1 VM & OS Specifications ................................................................................... 15
5.3.2 Disk Partitioning ............................................................................................. 16
5.3.3 Network specifications .................................................................................... 17

5.4 Database as part of XDS Thin Proxy deployments..........................................................17

6 VMware requirements for IDC SAS servers............................................................ 18

6.1 VMware Guest VM for VNA Storage Application Servers (SAS) ..................................... 18
6.1.1 VM & OS Specifications ................................................................................... 18
6.1.2 Disk Partitioning ............................................................................................. 18
6.1.3 Network specifications .................................................................................... 19

6.2 VMware Guest VM for IDC ILM Storage Application Server .......................................... 20
6.2.1 VM & OS Specifications ................................................................................... 20
6.2.2 Disk Partitioning ............................................................................................. 20
6.2.3 Network specifications .................................................................................... 20

6.3 VMware Guest VM – SAS - ARR-only deployment..........................................................21

6.4 VMware Guest VM –SAS as XDS thin proxy deployment ................................................21


7 Hardware Requirements ...................................................................................... 22

7.1 SPARC/Solaris HW requirements for IDC DB servers .................................................... 22
7.1.1 HW & OS Specifications .................................................................................. 22
7.1.2 Disk Partitioning ............................................................................................. 22
7.1.3 Network specifications .................................................................................... 23

7.2 Minimum x86 Hardware Requirements ....................................................................... 23
7.2.1 IDC Standalone Server – minimum specs for new installs ................................. 23
7.2.2 Consolidated platform – new or existing installs .............................................. 24

8 Database Disk Partitioning ................................................................................... 25

8.1 General design notes ................................................................................................... 25
8.1.1 Performance design notes ............................................................................... 25
8.1.2 Data protection / redundancy design notes ..................................................... 25

8.2 Database sizing ...........................................................................................................25

8.3 Database partitioning ..................................................................................................26

9 Server Sizing & Scaling ........................................................................................ 29

9.1 Overview ....................................................................................................................29

9.2 General Concepts for scaling ....................................................................................... 29
9.2.1 Notes on VM sizing: Rightsizing ...................................................................... 29
9.2.2 Notes on Horizontal vs. Vertical scaling (scaling in/out) .................................. 30

9.3 IDC 3.1.1 DB CPU sizing & scaling................................................................................31

9.4 IDC 3.1.1 SAS CPU sizing & scaling ..............................................................................31

10 Storage Requirements.......................................................................................... 32

10.1 Storage Tier specifications: ..........................................................................................32

10.2 Tier-1 IDC Database storage ...................................................................................... 33

10.3 Tier-2 On-Line cache storage........................................................................................33

10.4 Tier-3 storage..............................................................................................................33

10.5 Tier-4 Near-Line archive ..............................................................................................34

11 Networking Requirements.................................................................................... 35

11.1 Topology ................................................................................................................... 35
11.1.1 Front-end network ........................................................................................ 36
11.1.2 Back-end networks ....................................................................................... 36

11.2 Hypervisor network layout...........................................................................................36

11.3 Load Balancer .............................................................................................................36

11.4 Multicast addressing: ..................................................................................................37

11.5 Physical Network layout Example .............................................................................. 38
11.5.1 HP DL385p Server ........................................................................................ 38
11.5.2 Oracle SPARC server .................................................................................... 38

12 3rd Party Software Requirements .......................................................................... 39

12.1 OS version ..................................................................................................................39

12.2 Oracle SPARC/Solaris OS version – DB servers only ......................................................39

12.3 Database software.......................................................................................................39

12.4 Application Middleware ..............................................................................................40


12.5 Other..........................................................................................................................40

13 References........................................................................................................... 41

14 Revision History................................................................................................... 42


1 Introduction

1.1 Purpose

The purpose of this document is to define the specifications of a reference platform for IMPAX Data Center (IDC) 3.1.1.

This document details a reference platform. A reference platform, within the context of this document, is the base platform against which this product should be developed and tested. The reference platform impacts the manufacturing cost and the performance of the final product.

This document does not intend to detail legacy platforms, but rather the recommended base platform for this version of the product. Where possible, alternatives to major components (if already qualified) will be suggested for manufacturing and production related purposes.

Important Note: PDDs are AGFA HE internal documents. For customer specifications, use the Solution Architecture Workbooks from Bid Support to align with the customer's specific project scope and solution design.

1.2 Scope

The content of this document focuses on the hardware, operating system, virtualization, and necessary software, including explicit third-party software components, of the product platform for commercial resale.

This PDD defines the specifications of a reference platform for IDC 3.1.1 as a Guest VM deployed on a VMware ESXi Hypervisor (Server).

For deployments in a Customer Managed Environment, please refer to the sections related to the Guest VM only (VMware Requirements for IDC DB servers and VMware requirements for IDC SAS servers).

Additional standard Agfa platforms may be used as long as the configuration meets the requirements for the Guest VM.

1.3 Platform Definition Process

This Platform Definition is created as described in the ‘Global Procedure - Software Architecture and Design’ (IMS: 35704099).

The lifecycle and purpose of this Platform Definition document (PDD) is two-fold.

The PDD starts as an Initial Platform Definition Document:

The Initial Platform Definition is prepared as early in the project as possible to support the design of the product and the preparation of V&V planning. It needs to be reviewed and approved prior to final V&V testing. Input is expected from the product Architecture teams to align the platform definition with the intended functionality and market segment.


During the project Initiation Phase and early design phases, the PDD document has the status of Initial Platform Definition Document. In this status, the document provides the specifications of a Reference Platform for the Solution configuration for new installations. The reference platform, within the context of this document, is the base platform that this product will be developed and tested against.

In later phases of the project - and during the project life cycle - the Platform Definition Document may have to be updated and versioned in preparation for the Transfer for Sales. It will continue to be updated and versioned after the development project ends and post-release updates begin.

IDC 3.1.1 Platform Definition/Design Document status – LLID 55694328

Version | Date | Document Status
V1 | 12-7-2016 | Platform Definition Document
V3 | 14-9-2016 | Platform Design Document

1.4 Definitions, Acronyms, and Abbreviations

Agfa HealthCare terminology and abbreviations can be found in the Agfa HealthCare glossary in IMS: http://ims.agfa.net/he/en/intranet/ims/overview.jsp?ID=12462977

Def/Abr. | Full form
CRS | Commercial Requirement Specifications
EoL | End of Life
EoPL | End of Product Life
EoSL | End of Service Life
PAP | Product Availability Plan
Paravirtual | VMware's Virtual Storage Controller
RDM | Raw Device Mappings (VMware)
RDP | Remote Desktop Protocol (Microsoft)
SAS | Software Architecture Specification


2 Product Description

IMPAX Data Center (IDC) is a vendor neutral archive (VNA) that consists of a Database Server component (DB), a Storage Application Server component (SAS) and an Information Lifecycle Management Server component (ILM).

The DB and SAS components constitute the core of the VNA solution. The ILM server is deployed as an add-on to the VNA solution in a dedicated (VM) server.

The Audit Record Repository (ARR), when deployed separately, is a variation of the standard VNA deployment and is used for ATNA message record keeping. It consists of a DB ARR-only component and a SAS ARR-only component. The resource requirements for these components differ from those of the core DB and SAS components.

Please refer to the requirements documents for a more detailed description of the product:
IDC Cycle 1 – LLID 42481958 (this document applies to EI 8.0.x VNA as well)
IDC 3.1.1 – IDC VNA Requirements Specifications – LLID 53947833

2.1 Deployment options: New Installs

New projects are deployed on the reference x86/LINUX hardware platform as specified below – or equivalent. The solution can be delivered …

as a turnkey solution, including hardware, software and services

or the solution can be delivered on customer managed and owned infrastructure, in a software and services only model. In this scenario, the hardware has to meet the minimum specifications as provided below.

The product can be deployed in the following ways:

as a VNA, with Database (DB) and Storage Application Server (SAS) components

as an ILM server, with the ILM database combined on the IDC DB and a dedicated ILM SAS server

as a dedicated ARR solution, in a small footprint version of the software with dedicated DB and SAS components

as a dedicated XDS Proxy configuration, with dedicated DB and SAS components

2.2 Deployment options: Updates / Upgrades

New installs: LINUX only (DB and SAS). Historically, IDC has been delivered and supported on SPARC Solaris platforms (IDC 3.0 and earlier versions). As of IDC 3.1.1/Enterprise Imaging VNA 8.0, only the x86/LINUX platform is to be used for new deployments.

Upgrades: DB on Solaris. However, customers upgrading to IDC 3.1.1 with the IDC Database running on a Solaris platform can maintain the Solaris DB (assuming the HW is not End of Service Life – EoSL). Further details on the supported upgrade paths are identified in section 2.1.1 of the IDC 3.1.1 Project Contract at LL Node ID 53416994.


Upgrades: SAS (or SAS and DB) on LINUX. Otherwise, SPARC hardware (including SASs) will need to be replaced by x86 as per the specifications in the chapters below.
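The new-install and upgrade rules above can be condensed into a small decision helper. This is an illustrative sketch only; the function name and inputs are assumptions, not part of the product:

```python
def target_platform(install_type, current_db_os=None, db_hw_eosl=False):
    """Illustrative sketch of the section 2.1/2.2 platform rules.

    - New installs: x86/LINUX only, for both DB and SAS.
    - Upgrades: an existing Solaris DB may be kept if its hardware is not
      End of Service Life (EoSL); SAS servers always move to x86/LINUX.
    """
    if install_type == "new":
        return {"db": "x86/LINUX", "sas": "x86/LINUX"}
    if install_type == "upgrade":
        keep_solaris_db = current_db_os == "Solaris" and not db_hw_eosl
        return {"db": "SPARC/Solaris" if keep_solaris_db else "x86/LINUX",
                "sas": "x86/LINUX"}
    raise ValueError("install_type must be 'new' or 'upgrade'")
```

For example, an upgrade with a non-EoSL Solaris DB keeps the DB on Solaris but still requires x86/LINUX SAS servers.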


3 Hypervisor and Virtualization Requirements

This section defines the requirements of the underlying hypervisor and virtualization layer.

3.1 Virtualization/Hypervisor layer

This section defines the requirements of the underlying hypervisor. Detailed information on vSphere Guest OS compatibility and supported VM HW versions can be found in the VMware design guide, LLID 46072203.

Reference for VMware Lifecycle Information: http://www.vmware.com/files/pdf/support/Product-Lifecycle-Matrix.pdf

3.1.1 VMware vSphere Hypervisor specifications

For server deployments of IDC on VM hypervisor

Reference versions:
IDC 3.0 (a.k.a. EI VNA 8.0): VMware ESXi 5.5 u2
IDC 3.1.1: VMware ESXi 5.5 u2 or 6.0 u1 (*)

(*) Tested against both reference versions. Refer also to the VMware 6 assessment and design documents for VMware 5/6 equivalence – LLID 55201488 and LLID 55238076.

Minimum version: VMware ESXi 5.0 u2 or higher, supporting VM H/W version 8 (to remain compatible with ESXi 5.0)

Virtual Format and size: Open Virtual Appliance (OVA) file format deployment, or generic manual installation

VM HW version: VMware VM Hardware Version 8 (to maintain compatibility with ESXi 5.0)

Boot media: Internal Hard Disk drives / SDHC Flash Card
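The version constraints above lend themselves to a mechanical check. A minimal sketch (helper names are illustrative, and ESXi versions are compared as (major, minor, update) tuples):

```python
# Minimum hypervisor level and required virtual hardware version from section 3.1.1.
MIN_ESXI = (5, 0, 2)        # VMware ESXi 5.0 u2
REQUIRED_VM_HW = 8          # VM Hardware Version 8, for ESXi 5.0 compatibility

def esxi_config_supported(esxi_version, vm_hw_version):
    """esxi_version is a (major, minor, update) tuple, e.g. (5, 5, 2) for 5.5 u2."""
    return esxi_version >= MIN_ESXI and vm_hw_version == REQUIRED_VM_HW
```

Both reference versions (5.5 u2 and 6.0 u1) pass this check with VM HW version 8; a newer VM HW version would break ESXi 5.0 compatibility and is rejected.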

3.1.2 Microsoft HyperV

HyperV is explicitly identified as a non-supported platform for IDC 3.1.1 (and/or EI VNA 8.0).

3.1.3 Other Hypervisors

Hypervisors other than those specified above are neither validated nor supported by AGFA.

3.1.4 Virtualized Cloud offerings

A global program on Cloud providers and offerings is a work in progress (procurement). Further information will be provided as part of this Cloud Strategy Initiative as it becomes available. No specific validations or testing are performed against Cloud offerings: form, fit and functionality are covered by testing the product against a virtual reference platform (VMware).

IMPORTANT NOTE – based on field experience & escalations: When Cloud offerings are considered, Cloud providers usually do not provide visibility into the underlying hardware, but describe the offering in the form of virtual CPU's (vCPU's), virtual memory (vMEM), storage capacity (MB, TB), number of machines (VM's) and Service Levels.


Please explicitly state the expectations related to the hardware requirements, such as physical servers (x86 compatibility), associated CPU & MEM resources, Oracle licensing policies based on the number of cores, disk tiering requirements, high IOPS requirements, bandwidths, etc.

The average general purpose Cloud offerings on shared infrastructure do not satisfy the needs for the high IO requirements for an IDC (or Enterprise Imaging) solution. At minimum, ask for isolated and dedicated servers or blades for the AGFA HE solution.


4 Operating System Requirements

IDC and its subcomponents are built for the Operating System (OS) versions listed below.

4.1 LINUX

IDC 3.1.1 and other Enterprise Imaging components on LINUX (i.e. Web Server, DB, Core Server) are validated for the following versions:

Table 1 - LINUX OS specifications

OS version: Oracle Linux 6.7 (64-bit version) or later – "Golden LINUX" reference version

Note: next Golden LINUX refresh to be expected October 2016

Table 2 – Golden Image support table

Version | LINUX version | Comments
EI VNA 8.0 (IDC 3.0) | OL 6.7 | Sept 1st 2015 release – rfr. LLID 51563623
 | OL 6.7 | Update April 30th 2016 – for < EI 8.0.1 SU5
 | OL 6.8 | Update July 29th 2016 – v4 kernel as of 8.0.1 SU6
IDC 3.1.1 | OL 6.7 | Nov 13th release – rfr. LLID 52272506
 | | Update Feb 29th 2016 – v3 kernel – LLID 53858675
 | | Update April 30th 2016 – v4 kernel for DoD STIGs
 | OL 6.8 | Update July 29th 2016 – v4 kernel, in line with DoD STIG
Future: t.b.d. | OL 6.8 | Next refresh expected Oct 2016

Note on the Golden LINUX OS versions:

IDC 3.0/EI VNA 8.0 is released on the Golden LINUX of Sept. 1st 2015 as reference baseline. This can be patched up with the delta patch upgrades to later versions.

IDC 3.1.1 is developed on the Feb. 29th OL 6.7 LINUX version and can be patched up to later OL 6.7 versions.

Delta Patch Procedure: The Delta Patch Procedure describes the LINUX OS upgrade from the Sept. 1st 2015 baseline version: the OL 6.7 Sept. 1st release can be patched up to the Nov. 13th release. No forklift OS upgrade is required (i.e. no new deployment). Rfr. LLID 52371681 – OL 6.7 Delta Patching procedure.

When upgrading from IDC 3.0/EI VNA 8.0.x to IDC 3.1.1, the OS must be upgraded by the Delta Patch process as described in LLID 52371681 – OL 6.7 Delta Patching procedure.

The Golden LINUX image is a managed build and a refresh is expected on a quarterly basis; the refreshed versions will include latest security patches.
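The Golden Image support table (Table 2) can be represented as data for quick lookups. A hedged sketch with versions copied from the table (the structure and helper are illustrative, and years are inferred where the table omits them):

```python
# Golden LINUX releases per product, as listed in Table 2 (dates in ISO form).
GOLDEN_IMAGES = {
    "EI VNA 8.0 (IDC 3.0)": [
        ("OL 6.7", "2015-09-01"),   # rfr. LLID 51563623
        ("OL 6.7", "2016-04-30"),   # for < EI 8.0.1 SU5
        ("OL 6.8", "2016-07-29"),   # v4 kernel as of 8.0.1 SU6
    ],
    "IDC 3.1.1": [
        ("OL 6.7", "2015-11-13"),   # rfr. LLID 52272506
        ("OL 6.7", "2016-02-29"),   # v3 kernel - LLID 53858675
        ("OL 6.7", "2016-04-30"),   # v4 kernel for DoD STIGs
        ("OL 6.8", "2016-07-29"),   # v4 kernel, in line with DoD STIG
    ],
}

def supported_os_versions(product):
    """Distinct Oracle Linux versions released for a given product."""
    return {os_version for os_version, _date in GOLDEN_IMAGES.get(product, [])}
```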

4.2 Microsoft Windows OS – not supported for IDC

IDC does not run on, and is not validated for, Microsoft Windows.

4.3 SPARC Solaris 10

For backward compatibility on SPARC platforms, SPARC Solaris remains supported for the IDC database for now.


Table 3 – SPARC/Solaris OS specifications

OS Reference version | SPARC Solaris 10 (64-bit) – update NN | For the IDC 3.1.1 Database only; not for the IDC SAS
NOT supported | Solaris 11 is explicitly not supported!

Although binary compatible, SPARC Solaris 11 is not supported due to the fundamental deployment and operational differences, and the required organizational readiness to support such.

4.4 Other Operating Systems – not supported

Other Operating Systems are not validated and not supported.


5 VMware Requirements for IDC DB servers

This chapter describes the specifications of the VM's for the LINUX based IDC 3.1.1 deployments. The hardware requirements are described in Chapter 7 - Hardware Requirements.

5.1 VMware Guest VM for IDC DB servers

The IDC solution consists of a single database server (DB server) and one or more Storage Application Servers (SAS servers). This section describes the Linux based DB servers in their different deployment models.

5.1.1 VM & OS Specifications

Table 4 - OS and Resource specifications for IDC DB on Linux:

OS version: Oracle Linux (OL) – Golden LINUX image as described above

Virtualization layer: VMware Hypervisor – as per specifications in section 3 - Hypervisor and Virtualization Requirements

Virtual Processor: MINIMUM SPECIFICATION: 4 vCPUs (rfr. sizing table in section 9 - Server Sizing & Scaling)

Virtual RAM: MINIMUM SPECIFICATION: 12 GB vRAM, including TMPFS (rfr. sizing table in section 9 - Server Sizing & Scaling)

Default size of the Golden LINUX OVA is 8 GB: refer to the Golden LINUX Build document on how to resize according to IDC specifications at time of deployment (or later)

Video Card Memory: 32 MB RAM (video memory required for Full HD RDP support)
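A minimal sanity check against the Table 4 minimums might look like this (an illustrative sketch; the 8 GB TMPFS figure comes from the disk partitioning notes in section 5.1.2):

```python
MIN_VCPUS = 4        # minimum virtual processors for the IDC DB VM
MIN_VRAM_GB = 12     # minimum vRAM, including the 8 GB TMPFS
TMPFS_GB = 8

def db_vm_meets_minimum(vcpus, vram_gb):
    """True when the VM meets the Table 4 minimums and leaves headroom above TMPFS."""
    return (vcpus >= MIN_VCPUS
            and vram_gb >= MIN_VRAM_GB
            and vram_gb > TMPFS_GB)
```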

5.1.2 Disk Partitioning

Table 5 – IDC DB OS Disk specifications

Virtual Storage Controller: VMware Paravirtual

Virtual Hard Drive space (OS): VMDK1 + VMDK2

Default: IDC VM OS on dedicated LUNs – Tier-1/RAID10 (for small VM's, Tier-2/RAID-5 is acceptable, as per VM spec)

Golden LINUX OS default partitioning

Notes:
1. As per VM design, the VM data store can host multiple VM's – no dedicated LUN required per VM.
2. The VMDK's in the datastores should never be thin-provisioned.

Table 6 – VMDK-1 (drive sda in LINUX)

Volume/LUN | Drive – Mount Point | Size | FS Type | Comment
/dev/sda7 | / | 32 GB | EXT4 | EI VNA 8.0 versions
/dev/sda7 | / | 21 GB | EXT4 | IDC 3.1.1 versions
tmpfs | /dev/shm | 8 GB | | See note
/dev/sda1 | /boot | 1 GB | EXT4 | IDC 3.1.1


/dev/sda2 | /var | 12 GB | EXT4
/dev/sda5 | /var/log | 3.9 GB | EXT4
/dev/sda6 | /var/log/audit | 3.9 GB | EXT4

Total VMDK1 size: ~42 GB (EI 8.1) or ~53 GB (EI 8.0)

Additional: swap size depends on MEM size – see rules below. (Swap resides in the datastore, separate from VMDK1.) Calculate with 12.5% VM overhead within the VMFS.

Note 1: TMPFS resides in MEM and therefore depends on the available MEM size. Provision for 8 GB.

Note 2: The system uses swap space when it does not have enough physical memory to store the code and data.

If MEM < 2 GB: swap size = 1.5 x MEM
If 2 GB <= MEM < 16 GB: swap size = MEM size
If MEM >= 16 GB: swap size = 16 GB


Note 3: RAM should be adequately sized to support TMPFS, and leave sufficient headroom for the application processes!
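The swap sizing rules from Note 2 can be written as a small function (an illustrative sketch; boundary cases at exactly 2 GB and 16 GB are interpreted as falling into the larger bracket):

```python
def swap_size_gb(mem_gb):
    """Swap size in GB for a given physical memory size, per Note 2."""
    if mem_gb < 2:
        return 1.5 * mem_gb        # MEM < 2 GB: swap = 1.5 x MEM
    if mem_gb < 16:
        return float(mem_gb)       # 2 GB <= MEM < 16 GB: swap = MEM
    return 16.0                    # MEM >= 16 GB: swap capped at 16 GB
```

For a DB VM at the 12 GB vRAM minimum this yields 12 GB of swap, provisioned in the datastore outside VMDK1.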

Table 7 – VMDK-2 (drive sdb in LINUX)

Volume | Drive – Mount Point | Size | FS Type | Comment
/dev/sdb1 | /home | 2 GB | EXT4 |
/dev/sdb2 | /install | 18 GB | EXT4 | For 8.0
 | /opt | 30 GB | | For 8.1

Total VMDK2 size: ~32 GB (IDC 3.1.1) or ~20 GB (IDC 3.1 / EI VNA 8.0)

Example: Figure 1 - EI 8.1 Partitioning and usage (Golden LINUX 29-02-2016)
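As a quick cross-check of the stated totals, the partition sizes from Tables 6 and 7 can be summed (tmpfs is excluded because it lives in RAM, not on disk). Sizes are tracked in tenths of a GB to keep the arithmetic integer; this is illustrative only:

```shell
# Cross-check of the VMDK totals above (IDC 3.1.1 layout, tenths of GB).
vmdk1=$(( 210 + 10 + 120 + 39 + 39 ))   # / + /boot + /var + /var/log + /var/log/audit
vmdk2=$(( 20 + 300 ))                   # /home + /opt
echo "VMDK1 = $(( vmdk1 / 10 )).$(( vmdk1 % 10 )) GB"   # 41.8 GB, i.e. ~42 GB
echo "VMDK2 = $(( vmdk2 / 10 )).$(( vmdk2 % 10 )) GB"   # 32.0 GB
```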

Additional Virtual Storage controller: VMware Paravirtual (for connection to RDM)

External hard drive space for DATABASE: Refer to section 8 - Database Disk Partitioning. Tier-1 (block IO) storage for the Oracle Database files – rfr. VMDK2 above. (The required external hard drive space depends on the workload volume; rfr. database calculator LL 27517010.)


5.1.3 Network specifications

Table 8 – IDC 3.1.1 DB on LINUX: Network specifications

Virtual Network interfaces

Default: Reference network design (1) (2)
  1 x 1 GigE VMXNET3 NIC – PRODUCTION (front-end) vLANs
  1 x 1 GigE VMXNET3 NIC – STORAGE vLANs (when NAS is required, e.g. for DB backup)
  Note: the ADMIN network is not presented to the VM, only to the physical server

Minimum: 1 x 1 GigE VMXNET3 NIC

(1) Depending on the customer's network implementation in combination with network storage.
(2) Rfr. to the Networking Requirements section below for additional info.

5.2 VMware Guest VM for the IDC ILM DB Server

The ILM DB deploys on the same database instance as the IDC DB. The estimated extra capacity required for ILM information is marginal: ILM uses the "Enterprise Imaging schema" for rule management, but no data is stored there. Additional rules configuration is minimal, and there is only a limited amount of transient queue data.

See specifications in section 5.1 - VMware Guest VM for IDC DB servers.

5.3 VMware Guest VM – IDC DB deployed as ARR-only

(Note: IDC DB as Audit Record Repository-only is out of scope for the overall Enterprise Imaging Suite; the explanation below is only valid for IDC / Enterprise Imaging VNA solutions.)

Existing customer deployments with their IDC configured as an ARR will not upgrade to IDC 3.1.1; their next upgrade point will be VNA 8.1. The Enterprise Imaging VNA 8.1 product can be deployed as a dedicated Audit Record Repository (ARR)-only setup. Specifications below:

5.3.1 VM & OS Specifications

Table 9 – OS and Resource specifications for IDC ARR-only on LINUX

OS version             Oracle Linux (OL) – Golden LINUX image as described higher
Virtualization layer   VMware Hypervisor – as per specifications in section 3 - Hypervisor and Virtualization Requirements
Virtual Processor      MINIMUM SPECIFICATION: 4 vCPUs (rfr. sizing table in section 9 - Server Sizing & Scaling)
Virtual RAM            MINIMUM SPECIFICATION: 12 GB vRAM, including TMPFS! (rfr. sizing table in section 9 - Server Sizing & Scaling)
Video Card Memory      32 MB RAM (video mem RAM required for Full HD RDP support)


5.3.2 Disk Partitioning

Table 10 - Disk specifications for IDC DB as ARR-only

Virtual Storage Controller: VMware Paravirtual

Virtual Hard Drive space OS (VMDK1 & VMDK2): Identical to the standard Golden LINUX OS partitioning - rfr. section 5.1.2 - Disk Partitioning

Virtual Hard Drive space for DATABASE (VMDK3): External dedicated LUN – storage Tier-1 / RAID-10 – block IO (Note: the database file can never be smaller than 80 GB). One size fits all!

Table 11 - Partitioning for the VNA ARR database

Volume      Drive - Mount Point   Size        FS Type
/dev/sdc1   /data                 250(+) GB   EXT3

Additional Virtual Storage controller: VMware Paravirtual (for connection to RDM)

5.3.3 Network specifications

Table 12 - IDC 3.1.1 ARR DB on LINUX: Network specifications

Virtual Network interfaces

Default: Reference network design (1) (2)
  1 x 1 GigE VMXNET3 NIC – PRODUCTION (front-end) vLANs
  1 x 1 GigE VMXNET3 NIC – STORAGE vLANs (when NAS is required, e.g. for DB backup)
  Note: the ADMIN network is not presented to the VM, only to the physical server

Minimum: 1 x 1 GigE VMXNET3 NIC

(1) Depending on the customer's network implementation in combination with network storage.
(2) Rfr. to the Networking Requirements section below for additional info.

Note 1: the database is not supported on a NAS network - block IO only – rfr. VMDK3 above.
Note 2: the above configuration describes the virtual NICs; the underlying physical infrastructure is specified in the network design document.

5.4 Database as part of XDS Thin Proxy deployments

This configuration is maintained for historical purposes, related to the XDS consumer. The XDS manifests are cached in temporary online storage (image data passes through for registration).

Same configuration as for the IDC DB deployed as ARR-only – refer to section 5.3 - VMware Guest VM – IDC DB deployed as ARR-only.


6 VMware requirements for IDC SAS servers

6.1 VMware Guest VM for VNA Storage Application Servers (SAS)

The VNA solution consists of a database server (VNA DB server) and one or more VNA Storage Application Servers (VNA SAS servers). This section describes the SAS servers.

6.1.1 VM & OS Specifications

Table 13 – OS and resource specifications for IDC SAS server

OS version             Oracle Linux (OL) – Golden LINUX image as described higher
Virtualization layer   VMware Hypervisor – as per specifications in section 3 - Hypervisor and Virtualization Requirements
Virtual Processor      MINIMUM SPECIFICATION: 2 vCPUs (default setting) (rfr. sizing table in section 9 - Server Sizing & Scaling)
Virtual RAM            Virtual RAM requirements depend strongly on the setting of the Retrieve Cache option. By default, prior studies are retrieved into TEMP space in memory of the SAS server (/tmp/priorscache/tmpfs); this data is volatile. The Prior Retrieve location can be "externalized" on external (NAS) storage, which reduces the minimum required RAM size and allows sharing the Prior Retrieve cache between SAS servers (setting of TEMP_ONLINE_STORAGE).

Table 14 - IDC SAS virtual MEM table

Retrieve cache option                               Virtual RAM configuration
Internal RAM based TEMP cache (default)             12 GB RAM (minimum/default); 16 GB RAM recommended!
External File System based Retrieve Cache (NAS)     8 GB RAM (minimum); 12 GB RAM (default)

Refer to the Golden LINUX Build document on how to resize according to IDC specifications at time of deployment (or later).

Video Card Memory      32 MB RAM (video mem RAM required for Full HD RDP support)
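The RAM figures in Table 14 reduce to a simple lookup by retrieve-cache option. The sketch below is illustrative only; the option labels are our own shorthand, not product configuration keys:

```shell
# Minimum/default RAM per retrieve-cache option, per Table 14 (illustrative;
# the "internal"/"external-nas" labels are assumed shorthand, not config keys).
retrieve_cache="internal"
case "$retrieve_cache" in
  internal)     min_ram=12; rec_ram=16 ;;  # RAM-based TEMP cache (default)
  external-nas) min_ram=8;  rec_ram=12 ;;  # NAS-based retrieve cache
esac
echo "minimum ${min_ram} GB, recommended/default ${rec_ram} GB"
```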

6.1.2 Disk Partitioning

Table 15 – Disk Specifications for IDC SAS server

Virtual Storage Controller: VMware Paravirtual

Virtual Hard Drive space OS (VMDK1 + VMDK2):
Default: IDC VM OS on dedicated LUNs – Tier-1/RAID-10 (for Small VMs, Tier-2/RAID-5 is acceptable, as per VM spec)
Identical to the standard Golden LINUX OS partitioning - refer section 5.1.2 - Disk Partitioning.


Virtual Hard Drive space (VMDK3): WADO Cache on Tier-2 (block IO) storage / RAID-5 (or optionally on internal disks)

Note: it is advised not to use the WADO cache on the VNA, in favor of using the WADO cache on XERO for additional authentication. However, the WADO cache on the VNA is still required for proper functionality.

Table 16 - VNA SAS WADO2cache – default size

Volume      Drive - Mount Point   Size    FS Type
/dev/sdb1   /wado2cache           32 GB   EXT3

Total VMDK size: 32 GB
Total VMFS size: 40 GB = VM + VMware overhead

External Storage: NAS

Table 17 - IDC/VNA online cache

Volume           Mount Point   Size     FS Type
Online cache     /cache001     XXX GB   NFSv3     Storage Tier 2
…                /cacheNNN     XXX GB   …

Table 18 – IDC/VNA nearline cache

Volume            Mount Point    Size     FS Type
Nearline caches   /archive001    XXX GB   NFSv3     Storage Tier 4
…                 /archiveNNN    XXX GB   …

Optional, depending on the Retrieve Cache option (rfr. explanation above): TEMP_ONLINE_STORAGE is set as a subdirectory of the shared online cache volumes. As such it is part of the online DICOM image cache on storage Tier-2:

Volume       Mount Point            Size   FS Type
TEMPONLINE   /cache001/temponline   N.A.   Part of online cache
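The TEMP_ONLINE_STORAGE layout described above (a temponline subdirectory inside a shared online cache volume) can be sketched as a path helper; the function name is hypothetical:

```shell
# Hypothetical helper: derive the TEMP_ONLINE_STORAGE path for a cache volume,
# following the /cacheNNN/temponline layout described above.
temponline_for() {
  echo "$1/temponline"
}
temponline_for /cache001    # prints /cache001/temponline
```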

6.1.3 Network specifications

Table 19 - IDC 3.1.1 SAS on LINUX: Network specifications

Virtual Network interfaces

Default/Minimum: Reference network design (1) (2)
  1 x 1 GigE VMXNET3 NIC – PRODUCTION (front-end) vLANs
  1 x 1 GigE VMXNET3 NIC – STORAGE vLANs
  Note: the ADMIN network is not presented to the VM, only to the physical server

(1) Depending on the customer's network implementation in combination with network storage.
(2) Rfr. to the Networking Requirements section below for additional info on Enterprise Imaging network design requirements and extra required NICs! EI 8.0.x Network Design Document - LLID 49659008.
(3) The DICOM image on-line cache and nearline archive locations MUST reside on a fast dedicated storage (v)LAN!
(4) The above configuration describes the virtual NICs; the underlying physical infrastructure is specified in the network design document.


6.2 VMware Guest VM for IDC ILM Storage Application Server

The ILM Server is a separate server that is always required for Information Lifecycle Management (ILM) functionality. The ILM server connects to the same IDC database, provided that this database is in UTF-8 format. If not (for older legacy systems), a separate ILM DB server is required.

6.2.1 VM & OS Specifications

Table 20 - OS and Resource specifications for IDC ILM SAS server on Linux

OS version             Oracle Linux (OL) – Golden LINUX image as described higher
Virtualization layer   VMware Hypervisor – as per specifications in section 3 - Hypervisor and Virtualization Requirements
Virtual Processor      4 vCPUs (default setting) (rfr. sizing table in section 9 - Server Sizing & Scaling)
Virtual RAM            8 GB vRAM (default setting), including TMPFS! (rfr. sizing table in section 9 - Server Sizing & Scaling)
Video Card Memory      32 MB RAM (video mem RAM required for Full HD RDP support)

6.2.2 Disk Partitioning

Table 21 - ILM SAS server specifications

Virtual Storage Controller: VMware Paravirtual

Virtual Hard Drive space (VMDK1 + VMDK2):
Default: IDC VM OS on dedicated LUNs – Tier-1/RAID-10 (for Small VMs, Tier-2/RAID-5 is acceptable, as per VM spec)
Identical to the standard Golden LINUX OS partitioning - refer section 5.1.2 - Disk Partitioning.

Virtual Hard Drive space (VMDK3): N.A. - no WADO cache for the ILM server

6.2.3 Network specifications

Table 22 - IDC 3.1.1 ILM SAS on LINUX: Network specifications

Virtual Network interfaces

Default/Minimum: Reference network design (1) (2)
  1 x 1 GigE VMXNET3 NIC – PRODUCTION (front-end) vLANs
  1 x 1 GigE VMXNET3 NIC – STORAGE vLANs
  Note: the ADMIN network is not presented to the VM, only to the physical server

(1) Depending on the customer's network implementation in combination with network storage.
(2) The vNIC VMXNET3 interface for storage is not required for ILM functionality: ILM does not access the storage.
(3) Rfr. to the Networking Requirements section below for additional info on Enterprise Imaging network design requirements and extra required NICs! EI 8.0.x Network Design Document - LLID 49659008.
(4) The above configuration describes the virtual NICs; the underlying physical infrastructure is specified in the network design document.


6.3 VMware Guest VM – SAS - ARR-only deployment

In previous versions of IDC, the Storage Application Server (SAS) was sometimes deployed in ARR-only mode. This configuration is not supported on IDC 3.1.1, so those customer deployments should remain on their current version or upgrade to EI VNA 8.1.

6.4 VMware Guest VM – SAS as XDS thin proxy deployment

The Enterprise Imaging VNA SAS server for XDS deployments is identical to the standard VNA SAS server, with the exception of the DICOM image caches: these are not required for XDS functionality.

Refer to section 6.1 - VMware Guest VM for VNA Storage Application Servers (SAS).


7 Hardware Requirements

7.1 SPARC/Solaris HW requirements for IDC DB servers

The IDC solution consists of a single database server (DB server) and one or more Storage Application Servers (SAS servers). This section describes the specifications for a SPARC Solaris based DB server – for upgrades of existing installations only. No new deployments on SPARC Solaris.

NOTE: the release strategy defined for these upgrades is that any existing customer deployments below the minimum hardware threshold should transition to Linux.

IDC SAS servers for Solaris will be (re)deployed on LINUX.

7.1.1 HW & OS Specifications

Table 23 - IDC DB (Solaris) specifications

System                     Oracle T2 Plus, T3 or T4 CPU class server (T5120/T5140, T3-1/T3-2, T4-1/T4-2)
Number & details of CPUs   1 x UltraSPARC T2 Plus, T3 or T4 CPU (e.g. T3-1: 8-core, 128 threads)
Internal disks             4 x 146 GB min. (ZFS 3-way mirror + HS)
OS version                 Solaris 10 (64-bit)
                           In case of Solaris 11 supported HW only: customer managed hardware platform LDOM hosting a Solaris 10 global zone, AGFA managed; Solaris 10 non-global zone within the Solaris 10 global zone LDOM.
                           Note: as the LDOM virtualization shields the application from any Solaris 11 impact, validation on Solaris 10 on native hardware is equivalent to Solaris 10 in an LDOM.
Processors (CPU threads)   16 vCPUs (SPARC threads) - default setting (rfr. sizing table in section 9 - Server Sizing & Scaling)
Virtual RAM                16 GB RAM - MINIMUM; 64 GB RAM - RECOMMENDED; 128 GB RAM for large projects (>1 Mio Ex./yr)
                           - including TMPFS filesystem in MEM! (rfr. sizing table in section 9 - Server Sizing & Scaling)
Video                      n.a. - "headless configuration", remote management via RSC card (on the Control and Service LDOM in case of SPARC T7/Sol11)

7.1.2 Disk Partitioning

Table 24 - IDC 3.1.1 on SPARC/Solaris: Disk partitioning specification

OS Partitioning            OS on internal disks: ZFS 3-way mirror. Tier-1 storage / RAID-10 for UFS; ZFS – no further partitioning. RAID 3-way Z mirror for ZFS (or RAID-1 with HotSpare for UFS). ZFS is the preferred file system for the OS disks.

HBA for External Storage   Redundant FC storage interface: 1 x HBA, dual port, 8 Gb. For database block IO access.

External hard drive space for DATABASE   Refer to section 8 - Database Disk Partitioning. Tier-1 (block IO) storage for the Oracle Database files. (The required external hard drive space depends on the workload volume; rfr. database calculator LL 27517010.)

7.1.3 Network specifications

Table 25 - IDC 3.1.1 DB on SPARC/Solaris: Network specifications

Network interfaces   Reference configuration: 4 x 1Gb onboard NICs, 4 x 1Gbps add-on NICs, 1 x RSC (remote support card management interface)
                     Configuration: 2 x 1Gbps for Hospital production/front-end network, 2 x 1Gbps for back-end network (external storage), 2 x 1Gbps for MGMT network (for HW/VMware mgmt.)

1) Depending on the customer's network implementation in combination with network storage.
2) Refer to the EI 8.0.x Network Design document - LLID 49659008 - and to the Networking Requirements section below for additional info on Enterprise Imaging network design requirements and extra required NICs!

Network interfaces for LDOM   Minimum (default setting): 1 NIC for Hospital PRODUCTION/front-end network, 1 NIC for STORAGE/back-end network (external storage/NAS). The management network resides in the Control and Service Domains – customer managed HW.

1) Depending on the customer's network implementation in combination with network storage.
2) Rfr. to the Networking Requirements section below for additional info on Enterprise Imaging network design requirements and extra required NICs! EI 8.0.x Network Design Document – LLID 49659008.

Note 1: the database is not supported on a NAS network - block IO only.
Note 2: the above configuration describes the virtual NICs; the underlying physical infrastructure is specified in the network design document.

7.2 Minimum x86 Hardware Requirements

7.2.1 IDC Standalone Server – minimum specs for new installs

For a complete list of x86-based server platforms, see LL Workspace 30953328. For customers that do not wish to install the software into a managed VMware environment, the following table describes the stand-alone server specifications:


System                     HP DL385 Gen8 (LL Node ID 37857141) or Gen9 (LL Node ID 47775757)
Number & details of CPUs   Minimum 10 cores (rfr. to the EI 8.0.x Master PDD for sizing details)
RAM                        Minimum 64 GB RAM (NN x 8 GB)
Hard drive space           SHDC Flash Card; internal HD: 4 x 300 GB 10K SAS when using local HD for VMs
RAID                       Embedded (internal storage) RAID-10 solution
Network interfaces         Reference configuration: 4 x 1Gb onboard NICs, 4 x 1Gbps add-on NICs, 1 x iLO (integrated Lights-Out management interface)
                           Configuration: 2 x 1Gbps for Hospital production/front-end network, 2 x 1Gbps for back-end network (external storage), 2 x 1Gbps for MGMT network (for HW/VMware mgmt.)

1) Depending on the customer's network implementation in combination with network storage.
2) Refer to the EI 8.0.x Network Design document - LLID 49659008 - and to the Networking Requirements section below for additional info on Enterprise Imaging network design requirements and extra required NICs!

7.2.2 Consolidated platform - new or existing installs

A reference server consolidation platform would be based on the HP DL360/DL385p Gen8 platform, with specifications as described in the HP DL385p and DL360 Platform Design Document. This is for customers with existing infrastructure; new deployments will very likely order Gen9 or Gen10 servers. (Reference: LL 37857141.)

The IDC 3.1.1 / EI 8.0.x VNA solution can be installed on a dedicated hardware platform (see section 7.2.1 - IDC Standalone Server – minimum specs for new installs), or can be integrated with other AGFA HE Enterprise Imaging solution components in a consolidated hardware platform (customer provided or AGFA provided). In the latter case, sufficient resources need to be available as per the VM requirement specifications mentioned above.

Refer to section 9 - Server Sizing & Scaling for sizing details.

For a complete list of x86-based server platforms, see LL Workspace 30953328.

HP DL385p Gen8 6344 - Server BoM can be found here: LL 42874896.


8 Database Disk Partitioning

The database partitioning specifications below apply both to the x86-LINUX platform and to the legacy SPARC Solaris platform. The partition specifications follow Oracle's best practices for a correct performance design.

8.1 General design notes

In the ideal scenario and database disk layout, the following rules are obeyed.

8.1.1 Performance design notes:

1. For correct performance, the database partitions shall reside on Tier-1 / block IO storage (i.e. SAN or DAS, never NFS).

2. The database shall be split over different LUNs / file systems: the OS layer (Windows/LINUX/Solaris) handles the IO requests more efficiently when the database is spread over multiple file systems. Note: as disks are usually combined in larger disk groups, the different LUNs do not necessarily need to reside on different spindles (i.e. "horizontal LUN provisioning within a large disk group is acceptable").

3. Keep DATA and INDEX partition sizes always < 1 TB max! Multiple smaller partitions provide higher performance than one large partition. When more database capacity is required, add DATA / INDEX partitions as needed.
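Rule 3 implies a simple ceiling division when planning the number of DATA partitions. The figures below are hypothetical; only the 1 TB cap comes from the rule above:

```shell
# Hypothetical example: DATA partitions needed for a 3.5 TB requirement,
# keeping each partition under the 1 TB maximum (sizes in tenths of TB).
required=35          # 3.5 TB total DATA requirement (assumed)
cap=10               # 1 TB per partition (rule 3)
parts=$(( (required + cap - 1) / cap ))   # ceiling division
echo "${parts} DATA partitions"           # 4 DATA partitions
```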

8.1.2 Data protection / redundancy design notes:

4. The backup area shall reside on a different set of disk spindles than the database files - not just on a different LUN on the same set of spindles (i.e. not only as a separate LUN in the same disk group as the database data files).

5. Maintain redundant redo areas (i.e. redoA and redoB) on different disk spindles. Although this duplicates the writes, it adds redundancy. Note that if the redoA and redoB areas reside on the same disk sets (for example, just as two different directories on the same LUN), there is no added redundancy, only the negative aspect of doubling the writes.
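Rule 5 can be spot-checked at deployment time: if the redoA and redoB mount points resolve to the same underlying device, there is no added redundancy. A minimal sketch (coreutils `stat`; the function name is ours):

```shell
# Sketch: true when two paths reside on the same underlying device
# (stat -c %d prints the device number of the filesystem holding the path).
same_device() {
  [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

# Example: comparing a path with itself always reports the same device.
same_device / / && echo "WARNING: no added redundancy - same device"
```

In a real deployment the check would compare /redoa against /redob and flag the warning when they match.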

8.2 Database sizing

The database size and growth depend on various parameters:

- Initial number of legacy studies (from migrations, etc.)
- Initial size based upon the number of migrated reports, if applicable
- Use of scanned requests, or not
- Whether report versions are stored in the database as PDF reports or not
- ATNA audit record repository needs

Table 26 – Average database study growth

Stored database object                           Average size
Study structure (Order/Study/Patient)            250 KB / study
Scanned Request used                             30 KB – 175 KB / study
Reporting in IDC (average size report = xxx)     60 KB / study
ATNA records                                     Estimate: 500K/study over 5 yrs. <t.b.c.>

The database calculator provides the estimated sizing, depending on the workload volume: refer to LLID 27517010.
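For illustration only (the LLID 27517010 calculator remains authoritative), the per-study averages of Table 26 translate into a rough first-year growth figure; the 150,000 studies/yr volume is a hypothetical example:

```shell
# Rough yearly DB growth from Table 26 averages (illustrative only; the
# official database calculator is authoritative).
studies=150000                                   # hypothetical annual volume
kb_per_study=$(( 250 + 60 ))                     # study structure + report
gb=$(( studies * kb_per_study / 1024 / 1024 ))   # KB -> GB (integer)
echo "~${gb} GB/yr"                              # ~44 GB/yr
```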

8.3 Database partitioning

Table 27 – IDC DB partitioning on VM on LINUX

IDC DATABASE LUNs on LINUX: VMDK3-… (sdc – sd…)

Database LUNs: Dedicated LUNs – minimum storage Tier-1/RAID-10. On external SAN, block IO – NEVER NFS!

Pre-requisites: the partitions must exist prior to the installation.

Refer to the database calculator for estimated sizing, depending on the workload volume: rfr. LLID 27517010.

Small sites only: dedicated single LUN, block IO – minimum storage Tier-1 / RAID-10 (Note: the database file can never be smaller than 100 GB).

Table 28 - Small sites IDC database partitioning (<150K)

Volume      Drive - Mount Point   Size   FS Type
/dev/sdc1   /data                 X GB   EXT3


Table 29 – IDC DB Partitioning - bare metal and VM

Volume/LUN (system dependent!)   Drive - Mount Point (/dbase/…)   Size S/M sites (<500K)   Size L/XL sites (<1 Mio)   FS Type

/dev/sdc1 (1)   /data        XX GB      XX GB       EXT3/4 (3)
                /index (1)   n.a.       <t.b.d.>
/dev/sdc2 (2)   /redoa       2 GB (4)   16 GB (4)   EXT3/4 (3)   sdc for S/M (1)
/dev/sdc3 (2)   /redob       2 GB (4)   16 GB (4)   EXT3/4 (3)   sdc for S/M (1); redob located on a different LUN as per Oracle best practice
                /rbs         n.a.
                /system      n.a.       25 GB                    Guideline only
                /arch        n.a.                                Historic IDC - not for EI DB
                /flashback   n.a.

Total VMFS size required: GB = VMDK + Mandatory Free space

(1) Performance design notes: each partition on a separate LUN / separate file system, as per Oracle's best practices. Size depends on the DB calculation rules (rfr. DB Calculator – LLID 56087552). Keep DATA and INDEX partition sizes always < 1 TB max!

(2) Naming convention - LUN names sdc / sdd / sde: for L/XL/XXL sites, use different LUNs on different spindle sets for the database partitions; this is represented in different disk set names (i.e. sdc1, sdd1, sde1, …). For S sites with fixed configurations, the disk layout is also fixed and differs from a layout on SAN – refer to the specific platform definitions for Standalone, Standalone Premium and Single Server in the Master PDD.

(3) File system type, EXT3 vs. EXT4: for new deployments, use EXT4. For upgrades of existing deployments on EXT3, no reformatting/newfs is required; keep EXT3.

(4) Redo logfile size: to correctly support Monarch migrations, it is recommended to have each redo logfile sized to 500 MB, at 4 logfiles per directory; therefore set by default to min. 2 GB for new deployments. Test systems can use smaller redo sizing (500 MB) if needed.

For L / XL sites, calculate with the advertised numbers as first guidance.
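Footnote (4)'s default follows directly from the per-logfile recommendation:

```shell
# 4 redo logfiles of 500 MB each per directory = the 2 GB default minimum
# stated above (illustrative arithmetic only).
logfiles=4
mb_each=500
total_mb=$(( logfiles * mb_each ))
echo "${total_mb} MB (= 2 GB default redo area size)"
```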

Virtual Hard Drive space for Database backup - VMDK-4 (sdd): Dedicated LUN – minimum storage Tier-2 / RAID-5, on external SAN

Table 30 - VMDK-4 (drive sdd in LINUX) - bare metal and VM

Volume/LUN      Drive - Mount Point   Size                                          FS Type     Comment
/dev/sdd1 (1)   /backup               XX GB (3x DB size; at least 50 GB free! (3))  EXT3/EXT4   Oracle backup & archive logs; size depends on the DB calculation rules (rfr. DB Calculator – LLID 56087552)

Total VMFS size required: GB = VMDK + Mandatory Free space

Notes:
(1) Partition names might differ.
(3) Ensure at least 50 GB free to facilitate fast upgrades in the future.

NOTE - to avoid the EI DB root filesystem running full in case of disk problems: Enterprise Imaging expects the directories to be available as per the definition above. However, in case of disk problems (DB partitions not mounted), the root filesystem risks running full. To mitigate this risk, the Oracle DB directories can be mounted on their respective /dbase/XXX directories, with symbolic links pointing from the expected locations to this new mount point in /dbase, as follows (identical to the IMPAX software):

# cd /dbase
# mkdir data redoa redob index undo system backup
# ln -s /dbase/backup /backup
# ln -s /dbase/data /data
# ln -s /dbase/redoa /redoa
# ln -s /dbase/redob /redob
# ln -s /dbase/index /index      # optional - for L and XL systems only
# ln -s /dbase/undo /undo        # optional - for L and XL systems only
# ln -s /dbase/system /system    # optional - for L and XL systems only
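A deployment can verify the links above without touching the live filesystem; the helper below is a sketch (the function name is ours), demonstrated against a scratch directory:

```shell
# Sketch: check that a path is a symlink pointing at the expected target.
check_link() {
  [ "$(readlink -- "$1" 2>/dev/null)" = "$2" ]
}

# Demo against a scratch directory rather than the live /dbase layout.
scratch=$(mktemp -d)
ln -s /dbase/data "$scratch/data"
check_link "$scratch/data" /dbase/data && echo "link OK"
rm -rf "$scratch"
```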


9 Server Sizing & Scaling

9.1 Overview

Table 31 – IDC Sizing overview

Component: DB server
  Description: Separate IDC DB server. Scales up by adding resources (CPU, Mem). Oracle 11.2.0.4 / 12.1.0.2, ESL or ASFU license. VM, or physical machine for large configurations.

Component: Single SAS server
  Size: < 500,000 studies/yr
  Description: Separate DB and (single) SAS server. Archiving to local and remote disk archives, DICOM archives (AGFA and 3rd party), object storage, cloud archives, … Enterprise viewing directly from the archive by adding EI XERO Viewer / CWP.

Component: Dual SAS servers
  Size: < 1 Mio studies/yr
  Description: Identical features as above, and in addition: dual SAS server configuration; hardware load balancing across SAS servers supported.

Component: Triple SAS servers
  Size: < 3 Mio studies/yr
  Description: Identical features as above. A maximum of 3 SAS servers is supported out of the box.

Component: CUSTOM
  Size: > 3.5 Mio studies/yr
  Description: Identical features as above. A 4 SAS server configuration requires attention on supersizing aspects (storage configuration! rfr. LLID 45364393).

EI archiving to all supported destinations (HSM volumes, local near-line archive volumes, remote near-line archive volumes, DICOM destinations via DICOM Store & Remember, Object Storage, etc.).

Near-line archive volumes can grow without limitation, provided that the storage grows with the archive needs.
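The tiers in Table 31 can be read as a volume-to-server-count mapping. The sketch below is illustrative only (thresholds from the table; variable names are ours):

```shell
# Map annual study volume to SAS server count per Table 31 (illustrative).
studies_per_yr=800000                   # hypothetical site volume
if   [ "$studies_per_yr" -lt 500000 ];  then sas="1"
elif [ "$studies_per_yr" -lt 1000000 ]; then sas="2"
elif [ "$studies_per_yr" -lt 3000000 ]; then sas="3"
else sas="custom"                       # > 3.5 Mio: rfr. LLID 45364393
fi
echo "${sas} SAS server(s)"             # 800,000 studies/yr -> 2 SAS servers
```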

9.2 General Concepts for scaling

9.2.1 Notes on VM sizing: Rightsizing

The performance of a VM vSphere cluster is at its best when the resource size of the VM is right-sized:

Assigning too low amount of CPU/Mem/IO resources to a VM will become a performance bottleneck, and negatively impacts overall application performance (CPU/MEM consumption continuously between 90- 100%, high latencies on disk and network IO, etc.)

Assigning too many resources to a VM (i.e. 16 cores instead of 4-8 cores…) will have an equal adversary effect on performance and unneeded overhead on physical resources scheduling: all the required physical CPU’s/MEM needs to be freed-up before this (oversized) VM can be scheduled for processing; this causes other machines to be blocked for resources, and performance bottleneck as consequence.

Below resource specifications is a starting point for hardware capacity planning:

Page 30: PlatformDefinition IMPAX Data Center (IDC)3.1

Document Status: Approved Livelink ID: 55694328 Version: 5

Platform Definition 30/42 Livelink Node ID: 55694328AGFA INTERNAL DOCUMENT

Printed copies are not controlled and shall be verified on the electronic document management systemTemplate 46865842, v.4 (template revision history at 39703795)

CPU/MEM/IO measurements in the field will further indicate whether smaller (downsizing) adjustments can be made to optimize performance versus resource investment.

Note that the cost of memory is very low, and that the cost of servers/CPUs may be considered low compared to escalation interventions and a bad user experience.
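As a rough illustration of the right-sizing guidance above, sustained utilization near the extremes can be flagged automatically. The 90% bottleneck threshold comes from the text; the 20% "oversized" threshold and the function itself are illustrative assumptions, not product limits:

```python
def classify_vm_sizing(cpu_util_samples, mem_util_samples):
    """Rough right-sizing check from utilization samples (0.0-1.0).

    Sustained utilization >= 90% suggests the VM is undersized (per the
    guidance above); sustained < 20% (an assumed threshold) suggests the
    VM may be oversized and a downsizing candidate.
    """
    avg = lambda xs: sum(xs) / len(xs)
    cpu, mem = avg(cpu_util_samples), avg(mem_util_samples)
    if cpu >= 0.9 or mem >= 0.9:
        return "undersized"          # resource is a bottleneck
    if cpu < 0.2 and mem < 0.2:
        return "possibly oversized"  # candidate for downsizing
    return "right-sized"
```

Field CPU/MEM/IO measurements, as noted above, would feed such a check.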

9.2.2 Notes on Horizontal vs. Vertical scaling (scaling in/out)

Horizontal scaling:

Sizing and scaling of the IDC SAS server is a “Horizontal Scaling” process: scale out to handle more workload by adding another IDC SAS server VM.

Synonyms: horizontal scaling / out-of-the-box scaling / scaling out

Vertical Scaling:

For the IDC Database Server, sizing and scaling is a “Vertical Scaling” process: scale up to handle more workload by assigning more resources (CPU/MEM/IO…) to the same IDC DB VM.

Synonyms: vertical scaling / in-the-box scaling / scaling in.

Terminology | What | Diagram | Comments

Vertical scaling | Adding more compute resources (CPU/Mem) to the same VM to increase scalability. Applies to the IDC database. | Figure 2 - ref. LLID 50471511 | Synonyms: scaling up, scaling in the box

Horizontal scaling | Increase scalability by adding more VMs of the same component. Applies to the IDC SAS server. | Figure 3 - ref. LLID 50471511 | Synonyms: scaling out. For IDC SAS servers: near linear scalability up to 3 SAS servers; between 3-4: gradually declining; > 5 servers: limited extra capacity, but useful for increased load spreading.

Horizontal + Vertical scaling | A combination of horizontal and vertical scaling to scale further up. | |
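The scaling behaviour described above (near linear up to 3 SAS servers, gradually declining between 3 and 4, limited beyond) can be sketched as a toy model. The marginal-gain factors below are illustrative assumptions for the shape of the curve, not measured values:

```python
# Illustrative marginal-throughput gain per added SAS server:
# near-linear up to 3, declining for the 4th, limited beyond.
# These factors are assumptions chosen only to match the described trend.
MARGINAL_GAIN = {1: 1.0, 2: 1.0, 3: 0.95, 4: 0.6}

def relative_throughput(n_sas):
    """Relative cluster throughput for n horizontally scaled SAS servers."""
    total = 0.0
    for i in range(1, n_sas + 1):
        # Beyond 4 servers the benefit is mostly peak-load spreading.
        total += MARGINAL_GAIN.get(i, 0.1)
    return total
```

The model reproduces the advice that, past 4 SAS servers, scaling within the box (vertical) is the better lever.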

Page 31: PlatformDefinition IMPAX Data Center (IDC)3.1

Document Status: Approved Livelink ID: 55694328 Version: 5

Platform Definition 31/42 Livelink Node ID: 55694328AGFA INTERNAL DOCUMENT

Printed copies are not controlled and shall be verified on the electronic document management systemTemplate 46865842, v.4 (template revision history at 39703795)

9.3 IDC 3.1.1 DB CPU sizing & scaling

Table 32 – IDC DB - VMware sizing table for x86 LINUX

Hospital size [exams/yr] | vCPU [#] | MEM [GB] | Remarks
< 75 (XS)     | 4  | 16  |
< 150 (S)     | 4  | 24  |
< 350 (M)     | 6  | 48  |
< 500 (L)     | 8  | 64  |
< 1 Mio (L)   | 16 | 128 |
< 2 Mio (XL)  | 32 | 128 |
< 3 Mio (XL)  | 64 | 128 |
< 4 Mio (XXL) | 64 | 128 | Physical servers required
4 Mio+ (XXL)  | 64 | 256 | Physical servers required

9.4 IDC 3.1.1 SAS CPU sizing & scaling

Table 33 – IDC SAS - VMware sizing table for x86 LINUX

Hospital size [exams/yr] | vCPU [#] | MEM [GB] | #VMs | Remarks
< 75 (XS)     | 4     | 24  | 1   | never less than 24 GB
< 150 (S)     | 8     | 24  | 1   | never less than 24 GB
< 350 (M)     | 8-12  | 32  | 1   |
< 500 (L)     | 16    | 64  | 1   |
< 1 Mio (L)   | 32    | 64  | 2   |
< 2 Mio (XL)  | 32    | 64  | 2   |
< 3 Mio (XL)  | 32    | 128 | 3   |
< 4 Mio (XXL) | 32    | 128 | 4   |
4 Mio+ (XXL)  | 32-48 | 128 | 4-… | Scaling out to more than 4 SAS servers probably does not add extra throughput, but allows for better concurrent peak-load distribution; scale within the box instead.
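A hedged sketch of how the sizing tables above could be encoded as a capacity-planning lookup. Reading the small bands as thousands of exams per year (e.g. "< 75" as 75,000, consistent with the "< 1 Mio" rows) is an assumption made for illustration, and the "4 Mio+" physical-server rows are deliberately left out:

```python
import bisect

# Tables 32/33 above, keyed by the upper bound of each exams/yr band.
DB_SIZING = [  # (max_exams_per_yr, vCPU, mem_GB)
    (75_000, 4, 16), (150_000, 4, 24), (350_000, 6, 48), (500_000, 8, 64),
    (1_000_000, 16, 128), (2_000_000, 32, 128), (3_000_000, 64, 128),
    (4_000_000, 64, 128),
]
SAS_SIZING = [  # (max_exams_per_yr, vCPU, mem_GB, n_vms)
    (75_000, 4, 24, 1), (150_000, 8, 24, 1), (350_000, 12, 32, 1),
    (500_000, 16, 64, 1), (1_000_000, 32, 64, 2), (2_000_000, 32, 64, 2),
    (3_000_000, 32, 128, 3), (4_000_000, 32, 128, 4),
]

def lookup(table, exams_per_yr):
    """Return the first sizing row whose band covers the yearly exam volume."""
    bounds = [row[0] for row in table]
    i = bisect.bisect_right(bounds, exams_per_yr)
    if i == len(table):
        raise ValueError("4 Mio+ exams/yr: see the physical-server rows above")
    return table[i][1:]
```

For example, a 300k exams/yr site maps to the "< 350" band in both tables.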


10 Storage Requirements

References:
- Enterprise Imaging - Storage Requirement (procurement) Specifications - LLID 53571746 (shortcut to LLID 27527477)
- Oracle Database Configurator tool - LLID 27517010
- Storage Quote Request Form - LLID 40540724
- HP Storage Positioning & Support Matrix - LLID 32898121
- EMC Storage Positioning & Support Matrix - LLID 38241292
- NetApp Storage Positioning & Support Matrix - LLID 38825474

10.1 Storage Tier specifications:

Traditionally, storage is categorized into different Storage Tiers, and technical specifications have been assigned to each tier to match the functional requirements. Storage tiering is based on levels of data protection, performance or capacity considerations, age of data, frequency of use, and price.

The specific technical requirements are kept as generic as possible in order to allow storage suppliers to provide the best-fit technical solution from their portfolio.

The following storage tier definitions are currently used in the Order Generation / Order Fulfilment process:

Table 34 – Overview Storage Tier definitions

Tier | Purpose | Technology / Specifications | RAID level | Remarks

Tier 1 | Performance Tier for databases, native OS, VM OS, … | SAN / DAS; 15K drives or SSD; FC / SAS / SSD | RAID-10 for redundancy | For Oracle and SQL databases, and storage for OS system disks (Windows, Solaris, LINUX, VM OS, …); the fastest possible drives are required for Tier 1. Proprietary RAID configurations may be used, depending on the vendor, when there is zero performance impact compared to RAID-10.

Tier 2-a | Extreme Performance Tier | SSD for the EI Incoming cache for large sites (rfr. IOPS table) | RAID-5 | Fastest possible storage required for Tier 2. For the EI Incoming cache: SSD on RAID-5 or RAID-DP (as per the storage vendor's recommended practice).

Tier 2 | Performance + Capacity Tier for DICOM images, WADO/WEB cache, etc. | DAS / SAN / NAS; >= 10K drives; 15K drives strongly recommended; never 7.2K drives | RAID-5 (RAID-DP for NetApp - zero performance impact) | Fast accessible Short Term Storage (STS); proprietary RAID may be used, depending on the vendor, when there is no performance impact. Note: in some cases Tiers 2 & 3 may be consolidated based on customer needs. This storage tier is also acceptable for VMware VMFS images for small/medium sites, and for full RMAN backups; rfr. the notes in the tier descriptions. RAID-6 is not the same as RAID-DP!

Tier 3 | DICOM image cache, WEB cache | DAS / SAN / NAS; 10K drives recommended | RAID-5 (RAID-DP for NetApp - zero performance impact) | Fast Long Term Storage (LTS); proprietary RAID-DP may be used when there is no performance impact. Note: may be consolidated with Tier 2 based on customer needs.

Tier-4a | Near-Line Archive Tier (zip / blob files) | NAS, CAS, REST, …; SATA, NL-SAS; 7.2K drives | RAID-6 / RAID-DP (RAID-5 only when the technology is secure enough) | Long Term Archive spinning disks (LTA): multiple volumes, NFSv3 mounted. Note: proprietary RAID levels (R6, DP) may be required depending on the vendor's technology and best practice.


Tier-4b | Off-line Archive Tier (blob files) | Media / libraries / tape | | HSM-managed Long Term Archive - tape media (LTA). For small EI Core Server based departmental archive volumes only. Not for EI VNA, not for IDC.

Tier-4c | CLOUD archive | | | Long Term Archive in the cloud. EI VNA 8.0 (IDC) and EI 8.1 VNA only.
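The tier definitions above can be condensed into a simple category-to-tier lookup for deployment planning. The category keys are paraphrases introduced here for illustration, not product terminology:

```python
# Condensed view of Table 34: data category -> (storage tier, RAID level).
# Category names are illustrative; adjust them to the actual site design.
STORAGE_TIERS = {
    "database":           ("Tier 1",   "RAID-10"),
    "os_system_disk":     ("Tier 1",   "RAID-10"),
    "ei_incoming_cache":  ("Tier 2-a", "RAID-5 (SSD)"),
    "dicom_online_cache": ("Tier 2",   "RAID-5"),
    "nearline_archive":   ("Tier 4a",  "RAID-6 / RAID-DP"),
    "offline_tape":       ("Tier 4b",  "HSM-managed tape"),
    "cloud_archive":      ("Tier 4c",  "n/a"),
}

def tier_for(category):
    """Return the (tier, RAID) pair Table 34 assigns to a data category."""
    return STORAGE_TIERS[category]
```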

10.2 Tier-1 IDC Database storage

The IDC database should reside on storage Tier-1: fast block-IO storage, RAID-10, 15 krpm SAS/FC or better (SSD).

Oracle database partition design, to consider (for the architecture team):
- High-performance partitioning: traditional 4-partition design vs. 7-partition design:
  o traditional 4-partition design for small/medium sites: /data, /index, /redoa, /redob
  o 7-partition design for XL sites: /data, /index, /redoa and /redob, /flashback, /rbs, /system, /undo
- The /data and /index filesystems should not be bigger than 1 TB; add multiple data partitions to cover larger DB sizes (i.e. /dbase/data1, /dbase/data2, etc.).

Use the Database Calculator for capacity planning of the database size; the calculator can be found in LL 27517010.

Regardless of functionality and production volume, the database can never be smaller than 80 GB (16 GB data files).
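The 1 TB filesystem cap and the 80 GB database floor above can be sketched together. The function and its names are illustrative helpers, not part of the Database Calculator:

```python
import math

def data_partitions(db_size_gb, max_fs_gb=1024):
    """Split the database data requirement into <= 1 TB filesystems.

    Enforces the 80 GB minimum database size stated above; the partition
    names (/dbase/data1, ...) follow the example given in the text.
    """
    size_gb = max(db_size_gb, 80)        # database can never be smaller than 80 GB
    n = math.ceil(size_gb / max_fs_gb)   # 1 TB cap per /data filesystem
    return [f"/dbase/data{i}" for i in range(1, n + 1)]
```

For example, a 2.5 TB database requirement yields three /dbase/dataN filesystems.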

10.3 Tier-2 On-Line cache storage

The on-line DICOM image caches on the IDC SAS server should reside on storage Tier-2.

The total on-line cache volume of the VNA is split up into two or more "smaller" On-Line cache volumes.

Recommended specifications:
- Maximum size is 750 GB per volume; 500 GB recommended for high performance.
- Minimum of two - or more - volumes.
- Minimum 3 weeks' worth of total Near-Line volume cache, preferably more.
- The total number of On-Line caches is not limited: IDC / EI VNA can in principle handle an unlimited number of On-Line volumes; the capabilities of the Operating System will dictate what is feasible.
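A minimal sketch of the volume rules above (volumes of at most 750 GB, 500 GB recommended, at least two volumes, at least three weeks of cache). The `daily_ingest_gb` input is an assumed figure the capacity planner would supply:

```python
import math

def online_cache_volumes(daily_ingest_gb, vol_gb=500, weeks=3):
    """Number of On-Line cache volumes needed under the rules above."""
    assert vol_gb <= 750, "maximum 750 GB per volume"
    needed_gb = daily_ingest_gb * 7 * weeks  # hold >= 3 weeks of ingest
    return max(2, math.ceil(needed_gb / vol_gb))  # never fewer than two volumes
```

A site ingesting 100 GB/day would thus need five 500 GB On-Line cache volumes.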

10.4 Tier-3 storage

N.A. for VNA/IDC


10.5 Tier-4 Near-Line archive

The near-line archive volumes for IDC are hosted on storage Tier-4:
- Depending on the capabilities of the storage environment, the total Nearline volume of the VNA is split up into multiple smaller volumes (default).
- Multiple Nearline archive volumes are recommended for correct performance.
- A small number of very large volumes is not recommended:
  o performance impact (the OS can handle the IO more efficiently with multiple smaller volumes)
  o very large volumes are difficult to manage (future technology refreshes, copying/moving large amounts of data, etc.)
  o single very large volumes (sometimes even unlimited in size in the case of scalable NAS concepts) are only advisable for small to very small systems
- HSM in combination with QStar is not supported for IDC (negative field experience).


11 Networking Requirements

References:
- Network Design Document for Enterprise Imaging 8.x - LLID 49659008
- Network reference deployments PPT - LLID 50073402
The above documents apply to IDC 3.1.1 as well.

11.1 Topology

The reference network topology for IDC 3.1.1, as well as for Enterprise Imaging, consists of the following networks/VLANs:

Figure 4 - EI reference Network design


Figure 5 - Server reference network design

11.1.1 Front-end network:

Hospital PRODUCTION network: for clients, modalities, end-users, departments… connecting to the application components.

11.1.2 Back-end networks:

Internal APPLICATION network (optional, mainly in an ICIS setup): for communication between Enterprise Imaging application components that does not need to go over the hospital PRODUCTION network.

Internal STORAGE network: a dedicated storage network if the application relies on CIFS or NFSv3 storage shares.

Optional MANAGEMENT network: the physical hardware (or VM ESX server) can also be connected to the ADMINISTRATION network for IT administration purposes (not required for the application).

11.2 Hypervisor network layout

Details are defined in the Network Design for Enterprise Imaging 8.x - LLID 49659008.

The default layout is described in the table below. Refer also to the Enterprise Imaging 8.0.x Network Design Document - LiveLink NodeID 49659008 - for additional information on the network requirements.

11.3 Load Balancer

Reference: the reference AGFA HE hardware load balancer is BigIP-F5; refer to the EI HW LB PDD - LLID 55633522.

A load balancer is optional if only one SAS is deployed. It must be used when more than one Storage Application Server is deployed as part of the same cluster, and when deployed in an HA/DR layout such as Active/Standby, Active/Passive, etc.

The BigIP LB can be replaced by the customer's load balancer. In that case, the same specifications should be applied, translated to the customer-specific HW LB platform.


Load balancers should provide:
- an HA pair with global switching option
- support for load balancing algorithms (Round Robin, Least Connections, load-aware, QoS, ...); Round Robin is used as the default
- session persistence options
- support for the one-armed SNAT (automap) load balancing method (or DSR, Direct Server Return)
- a health check that can verify server-alive status as well as DICOM service availability
- FIPS compliance (if possible)
- optional support for SSL offloading (can also be done on the server)
- TCP stack tuning on a per-VIP basis (Nagle algorithm, send/receive buffer size, proxy or pass-through)
- reporting features (throughput, connections used, ...)
- all services (DICOM 104, APP 80/443, DMWL 3320, SAS/8080) active on the load balancer on DC1
- the possibility to group several production NICs into LACP, in order to have redundancy at NIC level; example: two 1 Gbit uplinks aggregated in one production trunk (plus another pair for the interconnect trunk)
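The health-check requirement can be illustrated with a minimal TCP probe. A production health check would additionally verify the DICOM service itself (e.g. via a DICOM C-ECHO against port 104), which this sketch omits:

```python
import socket

def tcp_health_check(host, port, timeout=2.0):
    """Minimal server-alive probe of the kind a load balancer runs.

    Returns True if the service port accepts a TCP connection within
    the timeout; a DICOM-aware check would go further than this.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```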

11.4 Multicast addressing:

Some Enterprise Imaging components make use of an IPv4 multicast IP address range: these components are the EI XERO Viewer, EI VNA and EI XERO Viewer 3D.

It is required/advised that sites implementing Enterprise Imaging components allow these default multicast addresses on the Enterprise Imaging Application network. In case of conflicts, the multicast address range used by the different components can be changed. In order not to cause any other conflicts, the address must be selected from the free address ranges of the Administratively Scoped IP Multicast - Local Scope addresses, as described in the following references:

References:
- IPv4 Multicast Address Space Registry: http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
- IPv4 Multicast Address Assignments: http://tools.ietf.org/html/rfc5771
- Administratively Scoped IP Multicast: http://tools.ietf.org/html/rfc2365
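Whether a candidate address falls in the administratively scoped Local Scope range (239.255.0.0/16 per RFC 2365) can be checked with the standard library:

```python
import ipaddress

# RFC 2365: the administratively scoped block is 239.0.0.0/8;
# the IPv4 Local Scope referenced above is 239.255.0.0/16.
LOCAL_SCOPE = ipaddress.ip_network("239.255.0.0/16")

def is_local_scope_multicast(addr):
    """True if addr is a multicast address inside the IPv4 Local Scope."""
    ip = ipaddress.ip_address(addr)
    return ip.is_multicast and ip in LOCAL_SCOPE
```

Such a check helps confirm that a replacement address chosen to resolve a conflict stays inside the allowed range.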


11.5 Physical Network layout Example

11.5.1 HP DL385p Server

Figure 6 - HP physical network connection schema example

11.5.2 Oracle SPARC server

Source: Oracle

Figure 7 - Oracle SPARC T7 physical network connection schema example


12 3rd Party Software Requirements

12.1 OS version

Operating System: Golden LINUX reference version. Refer to the OS versions section 4.1 - LINUX.

Remote Access: PuTTY, ssh

12.2 Oracle SPARC/Solaris OS version – DB servers only

Operating System: Solaris 10 (64-bit) (only for existing IDC 3.0 customer instances running on Solaris)

Remote Access: PuTTY, ssh

12.3 Database software

Database Software: for VNA DB and VNA DB ARR:

Default: Oracle 11gR2 (11.2.0.4) - Standard Edition One (SE1) - 64-bit.

Supported: Oracle 11gR2 (11.2.0.4) - 64-bit - Standard Edition (SE) and Enterprise Edition (EE).

Table 35 - Supported Oracle Database versions for EI VNA 8.0 and IDC 3.1.1

EI version | database version | edition | license model | comments
VNA 8.0 (IDC 3.1) | default: 11g 64-bit, 11.2.0.4 | SE1 | ESL (7) |
 | supported: 11g 64-bit, 11.2.0.4 | SE | ESL (7) |
 | supported: 11g 64-bit, 11.2.0.4 | EE | ESL (7) | when ODG required
 | supported: 11g 64-bit, 11.2.0.4 | EE | PAH (2) | in hosted environments
IDC 3.1.1 upgrades | 11g 64-bit, 11.2.0.4 (3)(6) | | | keep the same version and edition at upgrade; a new license is required when the edition/version changes
IDC 3.1.1 new installs | default: 11g 64-bit, 11.2.0.4 (3) | SE1 (1) | ESL/PAH (7) |
 | 11g 64-bit, 11.2.0.4 (3) | SE1 (1) | ASFU | NOT supported! EE only; SE or SE1 has ceased to exist as of 12.1.0.2
 | 12c 64-bit, 12.1.0.2 | EE (1) | ASFU, ESL (7), PAH (2) |
 | 12c 64-bit, 12.1.0.2 | SE2 (4) | ASFU, ESL (7), PAH (2) | rfr. contractual note (5)
 | 12c 64-bit, 12.1.0.1 | EE/SE/SE1 | | Oracle 12.1.0.1 is explicitly not supported for IDC 3.1.1

Notes:
(1) For now, due to contractual limitations.
(2) PAH in hosted environments, or when multiple legal entities connect to the same DB.
(3) Oracle 11 will be end of service life by the end of 2015. For future versions, Oracle 12 will be the reference, starting in 2016. However, the service model for the Embedded System Licensing (ESL) is not impacted by the EoSL for 11.2.0.4.
(4) Oracle 12.1.0.1 is the last version supporting the SE1 license model. As of 12.1.0.2, SE1 and SE are no longer valid license editions and are replaced by SE2. Note that 12.1.0.1 is explicitly not supported.
(5) Contractual note: the old AGFA/Oracle buyout agreement (ESL, PAH, ASFU) includes the major DB versions 11g and 12c, and covers the editions SE1, SE and EE. The new buyout agreement (May 2016) covers 12c, SE2 and ASFU licenses.
(6) Upgrading from 11g to 12c is possible/contractually allowed, but requires a new license.
(7) In the new contractual setup, integration into customer-owned Oracle DB licensing is allowed.

12.4 Application Middleware

Middleware:
- JBoss: VNA DB, VNA SAS and VNA ARR server components: JBoss EAP v4.3 CP 10; VNA ILM Server: JBoss 4
- Java: Java 6

12.5 Other

Other 3rd party software: for other packaged software, refer to the External Software Components Matrix (ESC) - captured for now in the DHFI file, DHFI tab, at Livelink Node ID 53784909 (or in the CTF tracker indicated in the ESC reference).


13 References

Related Platform Definition Name | Node ID / Link
IDC 3.1.1 DHFI & Transfer Readiness Record | LLID 53784909
Enterprise Imaging Software Architecture Specification (SAS) | LLID 14399480
Consolidated requirements - Enterprise Imaging VNA 8.0.x | LLID 42481958
Enterprise Imaging VNA 8.0.0 Software Architecture Specification (no plans to update/rebrand to "IDC 3.1.1") | LLID 49823011
IDC Information Lifecycle Management (ILM) Software Architecture spec (applicable for Enterprise Imaging 8.0.x) | LLID 42559471
HP DL 385p Gen8 Platform Design Document | LLID 37857141
HP DL 385p Gen8 6344 - Server BoM | LLID 42874896


14 Revision History

Table 36 – EI 8.0 VNA PDD revision history - LLID 49744949 - DISCONTINUED

Version | Date | Author | Description
1 | 2015-05-05 | DS | First draft, based on IDC 3.x
2 | 2015-05-11 | DS |
3 | 2015-08-26 | DS | Team review update
5 | 2015-09-22 | DS | Version for approval
6 | 2016-05-12 | DS | Updates and review workflow to include IDC 3.1.1
7 | 2016-07-12 | LWC | Updates throughout per the review workflow in-line and discussion comments, and per a working session with PD&V. In summary, changes include:
- Introduce OEL (Linux) 6.7 to replace OEL 6.3
- Re-introduce support for Solaris 10 on the IDC DB (supported upgrade paths only, with hardware restrictions)
- Re-add required IDC drives/partitions (as noted in the VNA 8.0.0 PDD)
- Adjust memory based on SAS/DB/ILM requirements (as noted in the VNA 8.0.0 PDD)
- Remove 2 additional network cards
- Remove Oracle 12 prerequisites and add Oracle 11 prerequisites
- Remove IPv6 support - JBoss EAP 4.3 does not support IPv6

Table 37 – IDC 3.1.1 PDD revision history – LLID 55694328

Version | Date | Author | Description
v1 | 2016-07-12 | LWC | Equivalent to version 7 of the document at Livelink Node ID 49744949, in order to create the IDC 3.1.1 version of the PDD per the review on the previous version of this document.
 | 2016-07-27 | LWC | Minor updates to sections 3.5.3, 4.1.1, 4.1.3, and the table in section 5.3.2 per the previous review workflow.
v2 | 2016-09-14 | DS | Refresh of the document based on the above changes; uploaded to LL. Reworked sizing tables based upon field feedback and performance escalations. Version for review.
v3 | 2016-09-16 | DS | For review
v4 | 2016-09-21 | DS | Team review - last version for IDC 3.1.1 release approval
v5 | 2016-09-22 | DS | For IDC 3.1.1 release approval


Details as of PDF Creation Date

Document Metadata

Title: IDC 3.1.1 Platform Definition Document

Livelink ID: 55694328

Version#: 5

Version Date: 2016-09-22 12:22 PM CET

Status: Approved on 2016-09-29 06:38 PM CET

Owner: Leyton Collins (axntd)

Created By: Leyton Collins (axntd)

Created Date: 2016-07-12 10:06 PM CET

PDF Creation Date: 2016-09-29 06:39 PM CET

This document was approved by:

Signatures:

1. Dirk Somers (amduv) on 2016-09-22 12:42 PM CET
2. Stephen Mallory (amigg) on 2016-09-22 09:10 PM CET
3. Sue Sing-Judge (amilr) on 2016-09-22 02:27 PM CET
4. Charles Gieringer (amimr) on 2016-09-29 06:22 PM CET
5. Hendrik Plaetinck (amnvx) on 2016-09-22 06:16 PM CET
6. Peter Libbrecht (amnlj) on 2016-09-23 12:04 PM CET
7. Leyton Collins (axntd) on 2016-09-22 03:27 PM CET

Detailed Approver History:

• Approval Workflow started on 2016-09-22 12:24 PM CET
  ◦ Approval task originally assigned to and completed by Hendrik Plaetinck (amnvx) on 2016-09-22 06:16 PM CET
  ◦ Approval task originally assigned to and completed by Dirk Somers (amduv) on 2016-09-22 12:42 PM CET
  ◦ Approval task originally assigned to and completed by Leyton Collins (axntd) on 2016-09-22 03:27 PM CET
  ◦ Approval task originally assigned to and completed by Peter Libbrecht (amnlj) on 2016-09-23 12:04 PM CET
  ◦ Approval task originally assigned to and completed by Charles Gieringer (amimr) on 2016-09-29 06:22 PM CET


◦ Approval task originally assigned to and completed by Stephen Mallory (amigg) on 2016-09-22 09:10 PM CET

◦ Approval task originally assigned to and completed by Sue Sing-Judge (amilr) on 2016-09-22 02:27 PM CET

Version & Status History

Version# | Date Created | Status
5 | 2016-09-22 12:22 PM CET | Approved - 2016-09-29
4 | 2016-09-21 12:04 PM CET |
3 | 2016-09-16 12:42 PM CET | Reviewed - 2016-09-21
2 | 2016-09-14 05:01 PM CET |
1 | 2016-07-12 10:06 PM CET | Reviewed - 2016-07-22
