
CTO Magazine

April - June 2013 Volume 01 Issue 03 www.ctoforumbd.org

p. 09  eBanking - an Experience Enriched in Japan...

p. 12  State of Data Security in 21st Century Banking...

p. 20  The Changing Face of Data...

p. 42  How CIOs and CTOs should Improve their Businesses...

p. 46  21st Century ICT Graduates: The Architects of Future...



CONTENTS | Vol. 01, Issue 03, Apr - Jun 2013

SECURITY
09  eBanking - an Experience Enriched in Japan
12  State of Data Security in 21st Century Banking

LEADERSHIP
42  How CIOs and CTOs should Improve their Businesses...
45  Six Leadership Qualities of a World-Class CTO

INNOVATION
20  The Changing Face of Data
22  Application Control: the key to a secure network
35  Challenges and Opportunities for the Banking Sector
36  Using RFID Technology for Digital Bangladesh

DIGITAL BANGLADESH
46  21st Century ICT Graduates: The Architects of Future Bangladesh

EDITORIAL

I have the pleasure of informing our esteemed readers that we have published two issues of the CTO Magazine in quick succession, with support from CTOs, CIOs and CEOs among local and international industry experts, who have all along contributed timely articles on relevant topics highlighting the technology challenges facing the country. We published and distributed over 2,500 copies of each issue among industry leaders, experts and decision and policy makers in the country.

This magazine is specifically targeted at country-specific technology issues, so that readers at the CxO level find it relevant and useful in their day-to-day affairs and in efficient decision making, thereby raising the level of contribution to ICT governance over time. There is no denying that this is the first publication of its kind in the ICT industry of our country with such a targeted readership among industry leaders and decision makers. As a result, this humble initiative and endeavour of the CTO Forum has been appreciated and honoured by leaders across different sectors.

Recognising the need to uphold and improve the quality of the magazine over time, we have already taken the initiative to reorganize our editorial board and have formed an advisory board that includes renowned educationists of our country. We admit, however, that this is a continuous process that will not come to fruition overnight. We hope that our publication will progressively get better in the coming years, hand in hand with the dynamic growth of technology and innovation, which already encompasses a wide spectrum of subject and functional areas, and we shall continue to count on the feedback of our readers and on the proactive contributions of all others in their respective areas.

We are very much thankful to Professor Dr. M. Lutfar Rahman, Vice Chancellor of Daffodil International University; Professor M Omar Rahman, Vice Chancellor of Independent University, Bangladesh; Professor Dr. M. Kaykobad, Department of CSE, BUET; and Professor Dr. Suraiya Pervin, Department of CSE, University of Dhaka, for their kind support and cooperation.

MD. NAZMUL HOQUE


Chief Editor

Md. Nazmul Hoque

Advisory Board

Professor Dr. M. Lutfar Rahman
Professor M Omar Rahman
Professor Dr. M. Kaykobad
Professor Dr. Suraiya Pervin

Editorial Board

Tapan Kanti Sarkar
Nawed Iqbal
Debdulal Roy
Dr. Ijazul Haque
Kanon Kumar Roy
Professor Dr. Syed Akhter Hossain

E-mail us: Feedback: [email protected]

Visit us on the web: www.ctoforumbd.org

Contact Information:
Office Secretary
CTO Forum Bangladesh
12-F (12th Floor), Meherba Plaza
33, Topkhana Road, Dhaka – 1000, Bangladesh
Email: [email protected]
Phone: +880-1818-525236

The articles in this magazine are copyrighted and all rights are reserved by the CTO Forum Bangladesh and the respective authors. No part of this magazine may be reproduced, copied, stored in a retrieval system, or transmitted by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission from the author. Breach of this condition may result in appropriate legal action. Published and printed in Bangladesh by The CTO Forum Bangladesh.


SECURITY

eBanking - an Experience Enriched in Japan
By Tapan Kanti Sarkar, Chief Technology Officer (CTO), NCC Bank Limited

Financial services and information technology (IT) are two industries that have been going through major changes worldwide over the last decade. Internet-based financial institutions are trying to capture clients from traditional banks. The Internet, along with the deregulation and globalization of financial systems, is creating new opportunities for financial institutions and driving new innovations in banking products and services all over the world, especially in the Asia Pacific region. Electronic banking, or eBanking, is changing the banking landscape at a rapid pace on the back of this global wave of information and communication technology. Consequently, Internet banking is spreading fast in most countries, and banks across the globe now offer services through WAP (Wireless Application Protocol). However, stiff competition in these areas exposes banks to substantial risks, so the need is increasingly felt that transparency and disclosure requirements should be met by the e-banking community.

According to the ITU (International Telecommunication Union), as of June 2010, 78.2% of the Japanese population was using the Internet, so banks in Japan are increasingly focusing on e-banking transactions with customers, with Internet banking an important part of their strategy. In fact, the first cyberbank of Japan, the Japan Net Bank (JNB), was started in October 2000 by a consortium of Sakura Bank, Fujitsu Limited, Nippon Life Insurance Company, and Sumitomo Bank. In June 2001, after considering various security and privacy issues, Sony Bank emerged as the second online bank in the country. The bank provides comprehensive personal banking services via the Internet, including foreign currency deposit accounts, credit cards and mortgages. Bank customers have access to 7,600 ATMs owned by Sumitomo Mitsui Banking. Sony Bank offers Internet-based software, called MoneyKit, which provides twenty-two financial tools to customers (Kerr 2001). Account holders can check their account balances, verify checks paid, and obtain loan and investment advice. At present, JNB and Sony Bank are the two major stand-alone Internet banks in Japan, but they have two minor competitors: eBank and IY Bank.

After these launches, the full-blown Internet age arrived in Japan, prompting even conventional banks to rework their Net business strategies. With the Internet evolving and spreading at a much faster pace than was envisaged when the Big Bang was set in motion, Net banks are perhaps still in their early stages and have their best days ahead of them. In the past 10 years, since Net banks first appeared in Japan, not one has reached its expected size or even established a solid profit base. The combined balance of deposits at online-only banks has been steadily increasing, yet it still makes up less than 1% of that of all banks.

Net banks are the product of financial deregulation, called "the financial Big Bang," which the Japanese government has been pushing ahead since the second half of the 1990s. Since then, experts have argued that new entrants in the banking sector would replace existing banks saddled with bad debt and perk up the industry. Hopes were especially high that entrants from other industries would bring new ideas and business models. In reality, Net banks have done little to broaden the market's scope. In the beginning, consumers were concerned about the safety of transferring money via personal computers or mobile phones, so it took these banks five years to turn a profit. The latest data underline how much smaller the operational sizes of virtual banks are compared to those of traditional banks: the combined deposit balances of the five online-only banks were slightly above 4 trillion yen at the end of June, a miserable 0.7% or so of the aggregate deposit balance of all of Japan's banks.

Putting too much effort into a service that requires a certain amount of manpower could undermine Net banks' low-cost edge. Still, Sony Bank has established an in-the-flesh service to go head-to-head with conventional banks, which themselves have been injecting massive amounts of management resources into their housing loan businesses. In the housing loan market, however, the bank is exposed to increasingly harsh competition from traditional banks, including its parent, Sumitomo Trust & Banking Co.

Conventional banks struggle to draw and retain wealthy customers by offering preferential interest rates. As one Japanese regional bank executive remarked, "the current unimpressive interest rate differential does not encourage many depositors to shift their money to Net banks." So while virtual banks struggle with their unstable profit bases, they are also having difficulty setting substantially higher interest rates than traditional banks. In this environment, Sony Bank and SBI Sumishin Net Bank appear poised to become comprehensive banks that provide a wide range of financial products and services. However, if they indeed go down this road, their battle with conventional banks will grow even stiffer.

Now all industry eyes are on smartphones, with many online banks planning to launch services for smartphone users at the beginning of next year. If they can successfully provide services and convenience that cannot be offered through ordinary mobile phones, the Net banks just might be able to turn these coming services into larger customer bases.

The current regulations of the Bank of Japan on the physical presence of bank branches are being modified to allow the licensing of banks and branches with no physical presence. The Report of the Electronic Financial Services Study Group (EFSSG) has made recommendations regarding the supervision and regulation of electronic financial services. Financial institutions are required to take sufficient measures for risk management of service providers, and the authorities are required to verify that such measures have been taken. Providing information about non-financial businesses on a bank website is not a violation as long as it does not constitute a business in itself. With respect to consumer protection, it is felt that guidance, rather than regulation, should encourage the voluntary efforts of individual institutions in this area. Protection of private information, however, is becoming a burning issue in Japan both within and outside the field of e-banking. Japanese banks are currently required by law to place disclosure publications in their offices (branches). 'Internet-only' banks, however, are finding it difficult to satisfy this requirement.

The report of the EFSSG recommends that financial service providers that operate transactional websites should practice online disclosure through electronic means and, at the same time, make equivalent paper-based disclosure. They should also explain the risks and give customers a fair chance to raise queries.

The Government of Japan intends to introduce comprehensive data protection legislation in the near future. There are no restrictions or requirements on the use of cryptography, although the approval of the Ministry of International Trade and Industry (MITI) is required to export encryption technology.

Before the spread of the Internet, Japanese banks were trying to popularize PC banking, which targeted Small Office/Home Office (SOHO) clients. Though around 200 banks provided the service, very few subscribers were attracted, owing to software problems and telephone charges, and partly to Japanese consumer behaviour. In a country like Japan, where 24-hour ATMs were only just becoming widespread, there was no strong demand for 24-hour dial-up PC banking, so, unlike in the USA, software and technology companies had no strong desire to get into the act.

Accessing information through the Internet is preferred by Japanese people, thanks to the strong growth of the Internet in Japan. So far only 6 banks offer banking on the Internet, whereas about 200 banks in the USA do so. As security for web-based Internet transactions improves, Internet banking can become the de facto distribution channel for financial products and services. Through the Internet, a bank can literally reach every Internet user on the planet, and the transaction cost is considerably lower. A bank can now practically operate worldwide without a single branch office, which translates into a huge reduction in cost per transaction. Moreover, business through the Internet runs 24 hours a day, 7 days a week, from anywhere in the world, and it is getting cheaper day by day.

Author Details:

Tapan Kanti Sarkar
CTO, NCC Bank Limited
President, CTO Forum Bangladesh
Feedback: [email protected]

SECURITY

State of Data Security in 21st Century Banking
With global borders increasingly becoming blurred due to the proliferation of IT, it is time for our financial institutions to redesign the whole security framework to address forthcoming challenges and grow the business
By Koushik Nath, VP, Systems Engineering, India & SAARC, Cisco Systems

Introduction

The Information Technology (IT) revolution has ushered in a paradigm shift in the banking industry. The model of banking has transformed from brick-and-mortar to all-pervading 'Anywhere and Anytime Banking'. Though the fundamentals of banking may have remained the same, customers' perception of 'value' and, therefore, business models are evolving at an ever-increasing velocity.

Today, if a bank can assure its customers of a viable 24x7 interface, it can hope to retain those customers for longer. IT has evolved and enabled the industry in many domains, including customer service, enhanced product delivery, cross-selling, multi-channel real-time transaction processing, minimal transaction costs, and increased operational efficiency, thereby impacting overall profitability and productivity in the sector. The fast-evolving trends of technology in the sector have blurred the boundaries of information ecosystems to include service providers and customers. To illustrate, in an electronic card payment system, data is directly accessed and processed by customers, service providers as well as other partner institutions. While this integrated environment has exponentially enhanced the service capability of banks and the experience of customers, it has introduced a new gamut of risks. In the currently prevailing global economic conditions, organized threats are being increasingly perpetrated against financial institutions. In line with expectations, survey results indicate that banks are constantly being exposed to sophisticated, organized and financially motivated threats. The increased targeting of customers through phishing, vishing and smishing attacks is also one of the important elements of the threat landscape. A unique aspect of information security in the banking industry is that the security posture of a bank does not depend solely on the safeguards and practices implemented by the bank; it is equally dependent on the awareness of the users of the banking channel and the quality of end-user terminals.

This makes the task of protecting information confidentiality and integrity a greater challenge for the banking industry. An elaborate survey done by an independent organization reveals stunning facts about the current state of information security practices in banks.

Highlights

• External threats and the increasing usage of online & mobile channels, along with regulatory requirements, are driving banks to invest in information security.

• Banks draw inputs from international standards such as ISO 27001/2, PCI-DSS, SOX, GLBA, etc. to establish their security function. However, there is a need to focus on proactive mechanisms such as threat modeling and to bring innovation to security initiatives.

• Information security is still seen as an IT-centric function, with the CISO reporting to the CTO/CIO of the bank.

• Absence of collaboration and synergy between the Security and Fraud Management functions leaves a significant gap in banks' efforts to curb financial fraud.

• Customer awareness of information security, along with insecure customer endpoints, is one of the most significant challenges faced by banks.



• CISOs are still spending significant time on operational activities, making it difficult to focus on strategic initiatives.

• When executing security-related responsibilities, the focus is still on arranging in-house resources, except for a few specialized areas like application security testing.

• Privacy has started to gain relevance with increased customer awareness, but measures advocated for customers' privacy protection are yet to be implemented by many banks.

• The adoption of measures that have been strongly advocated for transaction security, such as One-Time-Password (dynamic token), identity grid and risk-based authentication, is still at a nascent stage.

• Security of card transactions is lagging; even basic measures for ensuring card security have not been adopted by many of the banks.

• Managing security is more challenging in online banking and phone (IVR) banking as compared to other service delivery channels.

• The majority of banks continue to remain largely dependent on incidents being reported by their customers and/or employees, highlighting the need for a real-time, automated and intelligent incident management mechanism.

Data Security and Privacy

Data security in banks continues to be driven by external threats and regulatory requirements, whereas data privacy is slowly beginning to gain relevance. Information security is still seen as an IT-centric function with minimal coordination with the fraud management function. Lack of customer awareness of information security and the threat from insecure customer endpoints are key challenges faced by the banks.

External threats and the increasing usage of online & mobile channels, along with dependency on third parties, are driving banks to invest in information security. The focus of security initiatives seems to be concentrated on keeping continuous vigilance over security issues and vulnerabilities and on reviewing the environment against new age threats. However, banks need to focus more on proactive mechanisms such as threat modeling and bring innovation to security initiatives that help address evolving challenges. In light of the increasing sophistication of new age threats and the rising complexity of the banking environment, some banks have started to collaborate with external and internal sources for information security.

Information security is predominantly a central function in banks. This reflects the ongoing consolidation of banking infrastructure and the adoption of core banking solutions. It has been observed that the involvement of business functions, through their representatives, in coordinating security in their respective units seems to be lacking. It is interesting to note that information security has no or minimal role in fraud management. This silo between the security and fraud management roles leads to a significant gap in banks' efforts to curb financial fraud, since security compromises are seen as a tool for committing financial fraud. Information security is still seen as an IT-centric function, with CISOs reporting to the CIO/CTO, whereas the global trend is for CISOs to hold a distinct position in the organization's hierarchy and report to the CEO/COO/MD.

Data privacy is slowly beginning to gain relevance in third world countries. Customers are becoming aware and increasingly conscious of their rights and of banks' obligations towards personal information protection. Banks must align internal policies and procedures and deploy technology safeguards to protect sensitive personal information. They need to implement specific measures, such as formulating privacy policies, conducting privacy impact assessments and embedding data privacy in business processes, to make practical sense of it. Measures such as a privacy policy on the corporate website, a link to the privacy policy on user data forms, disclosure of information to third parties, and privacy policy notices to customers need to be adopted for all-round secure banking.

Major challenges faced by Banks

Information security in banking has assumed significant importance, and the top management of banks is fully committed to providing support, as the trend reflects. However, the increasing omnipresence of banking services and the endeavor to enhance customer experience undermine the security posture until processes are fully matured.

One of the most significant information security challenges highlighted by the banks is the "lack of customer awareness of information security and the threat from insecure customer end-points". The boundary-less cyberspace exposes banks to internationally organized crime and new age threats. Further, with banks increasingly working with third parties and, in the process, sharing business information, management of third-party risk is also becoming a challenging task.

Security Governance

The age-old adage "Security is everyone's responsibility" is beginning to be realized in the banking sector in some parts of this region. While most information security responsibilities lie with banks' dedicated information security teams, business users, compliance, and audit teams are important contributors. The division of work between IT Infrastructure, IT Security and the CISO needs to be properly aligned with their responsibilities to yield better results.

Apart from Business Continuity and Disaster Recovery Planning, the IT Infrastructure, Security and CISO teams need to be involved in business security initiatives, especially in defining the security requirements of their business and in security strategy planning. It has also been observed that banks do not seem too keen on availing the services of external consultants and service providers except for specialized services such as application security testing, gap assessment, VA/PT and security policy formulation.

There seems to be a lack of clarity on CISOs' roles and responsibilities. CISOs are spending their time across strategic and operational activities, which may create challenges in the availability of their time and pose a challenge to CISOs in effectively utilizing it. Ideally, the CISO should be a business leader who engages in communicative, collaborative and integrative activities rather than operational tasks.

Security in Service Delivery

The fact is, banks have recognized that customer awareness of security issues is not only a hygiene factor but also a key pillar of information security. As expected, it has been noticed that basic hygiene factors such as enforcement of a password policy, password change at first login, account lockout and session timeout have been implemented across all banks for end-customer applications. Interestingly, some banks are beginning to adopt security measures such as captcha implementation at login. A minimal code sketch of two of these controls appears after the list below.

Some of the customer centric security initiatives:

• Enforce Password policy

• Password change at first login is mandated

• Account locking after unsuccessful attempts

• Session timeout after stipulated time

• Use strong SSL certificate

• Strong logout process (e.g. closing browser window to delete the cache)

• System generated Unique ID for account access

• Password expiry after stipulated time is implemented

• Password hashed while sending the HTTP request

• Password policy is guiding in nature

• User selected ID for account access

• ActiveX controls are required to be installed on the customer machine

• External applications like the JRE (Java Runtime Environment) are required to be installed on the customer machine

• Captcha implementation while login
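The following Python sketch illustrates two of the hygiene controls listed above, account lockout after repeated failures and idle-session timeout. The thresholds and class names are illustrative assumptions made for this article, not any bank's actual policy.

import time

MAX_FAILED_ATTEMPTS = 5          # lock the account after this many bad logins
SESSION_TIMEOUT_SECONDS = 300    # expire idle sessions after five minutes

class Account:
    def __init__(self, user_id):
        self.user_id = user_id
        self.failed_attempts = 0
        self.locked = False

    def record_login(self, success):
        # Account locking after unsuccessful attempts: reset on success,
        # count failures, and lock once the threshold is reached.
        if self.locked:
            return False
        if success:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_FAILED_ATTEMPTS:
            self.locked = True
        return False

class Session:
    def __init__(self):
        self.last_activity = time.time()

    def touch(self):
        # Call on every request to keep the session alive.
        self.last_activity = time.time()

    def is_expired(self):
        # Session timeout after the stipulated idle period.
        return time.time() - self.last_activity > SESSION_TIMEOUT_SECONDS

In a real deployment these checks would sit behind the bank's authentication service and be backed by persistent storage rather than in-memory objects.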

Against the backdrop of the increased focus of external threats on compromising the security of banking transactions, it is interesting to note the security measures implemented by banks for some of the key banking transactions. While measures such as SMS alerts, separate transaction passwords and virtual keyboards seem to be more popular, adoption of strongly advocated measures such as One-Time-Password (dynamic token), identity grid and risk-based authentication is still at a nascent stage. Banks need to publish information on the do's and don'ts of secure transactions on their websites. It is encouraging to note that a number of banks have begun to use public media and forums for spreading awareness, and this may be a direction which other banks will follow.
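To make the 'dynamic token' idea concrete, the sketch below derives a one-time password in the spirit of HOTP (RFC 4226): a short numeric code computed from a shared secret and a moving counter. It is purely illustrative; a production system would rely on a vetted OTP library, hardware or app tokens, and secure key provisioning.

import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC over the 8-byte big-endian counter, then dynamic truncation
    # to a short numeric code, as described in RFC 4226.
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

shared_secret = b"example-shared-secret"   # assumption: provisioned per customer
print(hotp(shared_secret, counter=1))      # prints a 6-digit code; the counter
                                           # (or a time step) changes on every use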

Card Security Initiatives

Across the industry, banks are lagging in the security of card transactions. Against the backdrop of well-known global cases of card breaches, it is surprising to note that basic measures for ensuring card security have not been adopted by many of the banks. Practices followed by banks, such as storing and printing authorization information like the CVV and expiry date, merchants creating plain-text card records, and non-masking of the card number (PAN), are nonconformant with globally accepted practices for card security such as PCI-DSS.

Some of the card security best practices adopted by banks (a PAN-masking sketch follows the list):

• CVV2/CID and PIN never get stored or printed at the merchant side

• Educate and make customers, merchants and employees aware of the importance of card security

• Use of secure protocol to transmit/receive card information

• Do not print card numbers on hard copies without a valid business need such as reconciliation. Hard copies are physically secured.

• Regular vulnerability assessment of the infrastructure that stores and transmits card data

• The stored card authorization information is encrypted


• Do not store the card data in log files in plain text

• Monitor the card transactions

• Masking the card number (PAN) in all user communication and transaction notifications

• Encrypting the stored card information: File encryption for encrypting card information stored in files

• Card expiry date is not printed and stored at the merchant side

• Deploy PCI-DSS standards

• The POS at merchants do not create the card records in plain text

• The scope of card security is extended to the designated merchants
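As a small illustration of the masking practice listed above, the Python sketch below keeps only the last four digits of a PAN visible in customer-facing messages. The exact masking format is an assumption based on common practice, not a quotation of any bank's standard.

def mask_pan(pan: str) -> str:
    # Keep only the last four digits visible; the full PAN should never be
    # logged, printed or echoed back to the user.
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) < 13:
        raise ValueError("unexpected PAN length")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))   # -> ************1111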

Security issues with Payment Gateways

The main issue concerning a payment gateway is security, i.e. encrypting crucial and sensitive card details, such as the customer's card numbers, during a card transaction.

The survey reveals that most of the respondent banks have implemented steps to ensure the security of the payment gateway application programming interface and communication channel through the use of appropriate security protocols. Banks also encrypt the card number and other confidential card information during storage and transit, and conduct periodic security testing of the underlying payment infrastructure.

Best practices on Securing Payment Gateways (an integrity-check sketch follows the list):

• Ensure communication channel security through secure protocol

• Encryption of card information during transmission and storage

• Security is ensured in the Payment Gateway API

• No storage of authorization information: CVV2 value/PIN

• Regular security testing of the underlying infrastructure is performed

• Enforce input validation for user data entries

• Sensitive data captured in the variables for authorization is not stored by the entities that are involved in the transaction

• Assuring message integrity during transit

• Web services that facilitate execution of the transactions are tested for known security flaws
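One of the practices above, assuring message integrity during transit, can be sketched with a keyed hash: the sender attaches an HMAC over the payload and the gateway recomputes and compares it. The field names and shared key below are invented for illustration; real gateways define their own signing schemes.

import hashlib
import hmac
import json

SHARED_KEY = b"merchant-gateway-shared-key"   # assumption: exchanged out of band

def sign(payload: dict) -> str:
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(payload), signature)

order = {"order_id": "12345", "amount": "250.00", "currency": "BDT"}
tag = sign(order)
assert verify(order, tag)          # unmodified message is accepted
order["amount"] = "1.00"
assert not verify(order, tag)      # tampered message is rejected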

The fact is, amongst all the service delivery channels used by banks, online banking is still considered the most challenging in terms of managing security. Interestingly, phone (IVR) banking is also considered a difficult-to-manage service channel from a security perspective. Channels such as TV (DTH) and online chat are scarcely used by the banks.

Mobile-based channels are currently used primarily to provide information and are consequently not considered difficult to manage. However, with increased mCommerce transactions expected, there may be increased security challenges for mobile-based channels.

Internal Process Alignment

The survey reveals that some banks need to create more robust processes to manage data security and privacy-related threats. The majority of banks still use the traditional method of risk-based internal or external audits for keeping track of threats and vulnerabilities.

The survey also reveals that while most banks have implemented backup data centers, the usage of mature practices such as Run Book automation is still at a nascent stage of adoption.

One of the major problems is discovering and identifying critical data elements within the organization and following information classification practices rigorously. There is also added stress on the involvement of process owners and lines of business in data security initiatives. Hence, there is a need for increased emphasis on standardizing and strengthening the organization's processes with respect to data handling.

As external threats continue to be a key driver for the security initiatives of the banking industry, banks need to adopt mature technologies to manage their threat and vulnerability management practices.

However, heterogeneous IT infrastructure and challenges in integrating threat and vulnerability management processes with IT infrastructure management processes are still seen as a hurdle.


There are various methods for tracking evolving threats and vulnerabilities:

• Risk based internal or external audits

• Subscribing to CERT-In alerts

• Through websites of data security vendors

• Security research reports of product and professional organizations

• Through peers/competitors

• Subscribing to vulnerability and exploit databases, etc.

• Subscribing to newsletters

• Mandating the vendors/third parties to report new threats and vulnerabilities in their products/services

• Through discussions on security forums on the internet

• Subscribing to analyst reports

Banks keep vigilant track of new issues, vulnerabilities and threats by collaborating with agencies and other knowledge sources such as the websites of security vendors, vulnerability and exploit databases, research reports, newsletters and analyst reports. However, the majority of banks still use the traditional method of risk-based internal or external audits for keeping track of threats and vulnerabilities. Banks are also increasingly adopting methods such as discussions on security forums and information obtained through peers and competitors.

New Age Threats

In the currently prevailing global economic conditions, organized threats are being increasingly perpetrated against financial institutions. In line with expectations, survey results indicate that banks are constantly being exposed to sophisticated, organized and financially motivated threats. The increased targeting of customers through phishing, vishing and smishing attacks is also one of the important elements of the threat landscape. With the emergence of mobile banking, banks are also concerned about their interfaces with mobile applications.

As the control requirements for information security spread beyond the boundaries of the banks and newer threats emerge, it will be imperative for bankers to use threat modeling techniques and deploy effective responses.

Some of the new age threats are listed below:

• Malware based attacks such as Zeus Malware that raids business accounts

• Man in the Browser (MITB) – Trojans in browser that modify user transactions

• The web as a channel for phishing attacks

• Botnet command and control targeting

• Cross channel and multilayered fraud that uses multiple channels to perpetrate an attack

• Man in the Middle (MITM) that modifies customer generated transactions

• Unsecured APIs in mobile banking

• Phishing through SMS

There seems to be a shift of security attacks towards the application layer, requiring a holistic approach to application security. When asked about the state of application security measures, more than half of the banks indicated that they had formulated a dedicated application security function. Banks have started to use Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) tools.

Due to the requirement of specialized skills for conducting blackbox and greybox testing, banks are increasingly availing these services from external service providers. However, enterprise-wide focus on application security, which has been globally adopted and enabled by enterprise tools that integrate application security into life-cycle processes, has not gained significant attention from banks. Moreover, the involvement of the developer community in application security is lagging. An organization's application portfolio is characterized by externally provisioned applications, third-party applications and packaged applications, along with in-house applications. Banks seem to be managing the security of their application portfolios adequately, except for externally provisioned applications.

Security Incident and Fraud Management

There seems to be a need for developing intelligence in incident management mechanisms, as many of the banks do not have in place measures like 'real-time monitoring mechanisms that can proactively detect anomalies', 'incident generation based on patterns and business rules' and 'integration with organizational IT processes for remedial actions'. Banks continue to remain largely dependent on incidents being reported by their customers and/or employees.

Business Continuity and Disaster Recovery Program

It is ironic that only two-thirds of the banks have Business Continuity Management as part of their information security framework. Most of the banks have lines of business involved in BCM planning and operations. Some banks have a mature BC/DR planning process in place wherein the scope of the BCP/DRP covers strategies for business processes and the recovery objectives of each business process. The scope of the BCP/DRP for most organizations extends to all externalities: network service providers, partners, vendors, and technical support.

The 24x7 operations of banks and the concentration of technology and processes have significantly increased the need for business continuity and disaster recovery capabilities. While many banks have implemented backup data centers, the usage of mature practices such as Run Book automation, business continuity planning tools, IT service failover and emergency notification systems is still at a nascent stage of adoption.

Physical Security of Information Processing Areas

Today banks have physical security integrated with IT security. Banks agree that the risk of data leakage increases with physical access to the operational facility. Therefore, organizations have established strong physical security controls for the perimeter, entry points and interior areas, along with mechanisms for the identification and authorization of employees. Banks also aim for a significant level of collaboration between physical security, information security and other functions. However, centralized monitoring of physical security seems absent in most of the banks.

Regulatory Requirements

It is evident that banks are working towards creating awareness among senior management, employees and board members, but very few are creating awareness amongst contractors and third parties. There is a clear need for banks to internalize the regulations by updating their policies, reviewing vendor contracts and implementing measures to strengthen monitoring and incident management. The trend, however, should be to prioritize the development of strong forensic capabilities that support data breach investigations.

Most banks use traditional risk mitigation techniques for third-party vendors, such as entering into contracts and non-disclosure agreements. However, banks must also adopt and implement proactive mechanisms like a third-party risk assessment framework, which can assist in continuous monitoring of risk exposure. Banks are strengthening, or planning to strengthen, their security incident and event monitoring by implementing solutions to address it.

Some of the other solutions adopted, or planned for adoption, address privileged access management, network access control, WAN data encryption, database activity monitoring and fraud management. However, the adoption of solutions addressing key areas such as data loss prevention, hard disk encryption, email encryption and mobile security has to improve.

Way Forward

Banks have strategically adopted new technologies to deliver better customer services, cut costs and gain competitive advantage. While the benefits of technology adoption are visible across public and private sector banks, the technology risks emerging from these technologies have also grabbed attention in recent years.

Although external threats have remained a key driver for banking security, the adoption of IS governance and data security standards and support from the central authority have also contributed to strengthening the security culture. Worldwide, the banking industry is responding to contemporary security challenges through a formal security function that derives inspiration from leading security standards for overseeing security initiatives in the banks.

Along with aligning the security initiatives to these leading security standards, banks need to invest their energies on providing architectural treatment to security, continuously assessing their exposure to threats through exercises such as threat modeling, and imbibing the practice of ‘security in design.’ This will bring a structured approach in their defense strategies and programs for efficiently & effectively mitigating the real threats by ensuring that security is considered right from the design phase of any product or service.

Though the security initiatives in banks are primarily driven by a centralized security function, the responsibility for security is fairly well distributed among the different functions, realizing the old adage that 'security is everybody's responsibility'. The focus is still on arranging in-house resources, except for a few specialized services like application security testing. There is significant scope for banks to further outsource such services, leveraging the expertise of external service providers and consultants.

The primary motivation behind new age attacks is financial gain, and therefore the focus of these attacks is on obtaining sensitive information such as login IDs, transaction passwords and credit card information.

This necessitates that banks take a data-centric approach when designing and implementing their security and privacy initiatives, and build synergies between their fraud management and information security functions. Also, against the backdrop of increasing card-related fraud, banks need to invest in improving the security of card transactions.

Banking customers continue to be the 'soft target' of new age attacks. Lack of customer awareness, insecure customer endpoints and their likely impact on the security of banking systems remain important areas of concern. To address these concerns, efforts by individual banks alone may not prove sufficient. The entire banking industry, with guidance from the Central Bank, needs to collaborate to enhance the security awareness of banking customers.

On the other hand, banks need to enhance their maturity in the area of customer-centric security. While basic measures for transaction security have been adopted, very few banks have implemented new-generation authentication solutions like dynamic tokens, identity grid and risk-based authentication.

With the increased digitization of customer information and increased levels of customer awareness, privacy has emerged as an important focus area for banks. However, privacy is yet to be factored into the banking ecosystem. In response to these developments, banks need to undertake a comprehensive privacy program that ensures the protection of their customers' information throughout its lifecycle.

Author Details:

Koushik Nath
VP, Systems Engineering, India & SAARC
Cisco Systems
Email: [email protected]

INNOVATION

The Changing Face of Data
The convergence of technologies, devices and apps is prodding more companies to take big data seriously
By Raju Chellam, Head: Cloud & Big Data Practice, Dell, South Asia & Korea

In a world where ABC (Analytics, Big Data, Cloud) are converging, the need to manage, monitor and predict business outcomes and market readiness is also changing. And with the wide prevalence of tablets, smart phones and social media, the urgency to mine intelligence from the tons of raw data is prodding more companies and governments to beef up their big data plans.

Simply put, BD (big data) is the amalgamation of structured data (in databases) with unstructured data (from email, Facebook posts, Twitter feeds, LinkedIn updates, video feeds, etc). After a few years of experimentation and early adopter successes, 2013 will be the year of larger scale adoption of BD technologies, says research house Gartner. According to a Gartner global survey of IT leaders, 42% of respondents stated they had invested in BD technology, or were planning to do so within a year.

“Organizations have increased their understanding of what BD is and how it could transform the business in novel ways,” says Doug Laney, a Gartner research vice president. “The new key questions have shifted to ‘What are the strategies and skills required?’ and ‘How can we measure and ensure our return on investment?’ Most organizations are still in the early stages, and few have thought through an enterprise approach or realized the profound impact that BD will have on their infrastructure, organizations and industries.”

Meanwhile, IDC (International Data Corp) projects that the global BD technology and services market will grow at a 31.7% CAGR (compound annual growth rate) – about 700% more than the rate of the overall ICT market – with revenues reaching US$23.8 billion in 2016. IDC expects BD spending in the Asia-Pacific outside Japan to reach US$603 million in 2013, up 42.6% over 2012. The market for BD will see a CAGR of 44% from now to 2015, IDC says.

Gartner notes that enterprises are taking BD initiatives in a rapidly shifting technological landscape with disruptive forces that produce and demand new data types and new kinds of information processing. They turn to BD technology for two reasons: necessity and conviction. Companies are becoming aware that BD initiatives are critical because they have identified obvious or potential business opportunities that can’t be met with traditional data sources, technologies or practices. Media hype is often backed with rousing use cases.

“This makes IT and business leaders worry that they are behind competitors in launching their BD initiatives,” notes Frank Buytendijk, a Gartner research vice president. “Not to worry, ideas and opportunities at this time are boundless, and some of the biggest BD ideas come from adopting and adapting ideas from other industries. Still, this makes it challenging to cut through the hype when evaluating big data technologies, approaches and project alternatives.”


IDC says the BD market is emerging at a rapid pace and incorporating technology and services from a wide range of existing and new market segments. While there are multiple scenarios that could unfold and many demand and supply variables remain in flux, the market will exhibit strong growth through 2016.

“The BD technology and services market represents a fast-growing, multibillion-dollar worldwide opportunity,” says Dan Vesset, IDC’s vice president for BD research. “It is an important topic on many executive agendas and presents some of the most attractive job opportunities for people with the right technology, analytics, communication, and industry domain expertise.”

One key to BD is cloud computing, which has already gone beyond the hype cycle. Forrester Research estimates the global cloud market to reach US$241 billion by 2020, from under half that level currently. More than 50% of the cloud market by 2020 will comprise hybrid clouds. That’s because enterprises such as banks, insurance companies and government departments seek full control of their sensitive, mission-critical apps and data but are willing to keep all customer-fronting information on a public cloud.

The final piece in this equation is mobile devices. Global sales of cellphones topped 428 million units in Q3 2012, with smartphone sales accounting for 40% of that, says Gartner. Smartphone sales were up 47% over Q3 2011, and the momentum continues.

In emerging countries worldwide, especially in Asia, the smartphone is the primary device for Internet browsing and soon, shopping. This convergence of technologies, devices and apps is prodding more companies to take BD seriously. The changing face of data will change not just the way the IT industry works, but also the way end-users and enterprises view IT. Gartner says actionable analytics will be driven by mobile, social and BD forces in 2013 and beyond. We’re just about getting started. Watch this space.

Raju Chellam is a Singapore-based cloud & big data expert. He has 30 years of experience in the ICT industry in Asia. He is Honorary President of the Business Continuity Group at the Singapore Computer Society; Executive Member of the National Cloud Advisory Panel; Honorary Secretary of the Cloud Chapter of the Singapore IT Federation; and South Asia Head of Cloud & Big Data at Dell.

Author Details:

Raju Chellam
Head: Cloud & Big Data Practice
Dell, South Asia & Korea
Email: [email protected]

INNOVATION

Application Control: the key to a secure network
By Andrey Efremov & Vladimir Zapolyansky

Introduction

Corporate network security is one of the most pressing issues for companies today. Malicious programs can cause substantial harm to a business, and a firm's reputation is not the least of its worries. Companies specializing in IT security offer a variety of costly solutions, although in many cases installing these solutions can actually lead to a steep increase in network maintenance costs. However, traditional security solutions cannot always provide guaranteed protection against unknown threats, especially when it comes to targeted attacks.

This article addresses an alternative approach to protecting corporate networks: the Whitelist Security Approach.

This approach builds on developments in launch control technologies (otherwise known as Application Control), adding Default Deny mode support and innovative new whitelisting technologies, or Dynamic Whitelist.

At Kaspersky Lab, we believe the Whitelist Security Approach is one of the key elements of protection for corporate networks of the future. Products that use this approach are capable not only of protecting a network against unknown threats but also of offering network sysadmins and IT security engineers progressive means of accounting for and managing software, including unwanted and unlicensed outside (i.e., non-work-related) programs.

What you need to know before searching for alternative security approaches

The amount of new software grows steadily each year. In order to provide users with quality protection, antivirus companies must quickly analyze the enormous inflow of information and process terabytes of data on a daily basis, classifying tens of millions of files. In terms of classification, software can basically be divided into three main categories: known malware, known to be safe, and status unknown.

Software that is not clearly classified as safe or a threat by an antivirus company is placed in the “status unknown” category.

Some unknown software contains malicious code, and these programs are the most dangerous for users, as well as the most challenging for antivirus products to detect. It is these programs that pose the greatest threat, since virus writers continue to hone their skills as more and more new malicious programs are produced.

In most cases, antivirus companies must remain on the front lines of this ongoing battle: each time virus writers develop a new malicious technology, antivirus companies respond by developing a new form of protection against it. As well as traditional signature-based technologies, antivirus companies currently employ an entire arsenal of modern security technologies to boost security. These include proactive heuristic methods (both statistics-based and dynamic) and cloud technologies, which respond instantly to new threats and also expand standard out-of-the-box security capabilities with online services that were previously unavailable. The traditional approach to security involves blocking known threats, including known malicious behavior templates. However, the stories behind the Stuxnet, Duqu and Flame Trojans all show that the traditional means of protection available on the market are more or less powerless against some new threats and targeted attacks. As a result, corporate network security faces ever-growing demands.

Given today's situation, software developers working in IT security are faced with the task of finding alternative solutions capable of substantially boosting the security of corporate networks. The alternative approach addressed in this article, Whitelist Security, not only meets today's security needs, it also allows antivirus companies to focus on more than just fighting off new malicious technologies.

The Whitelist Security Approach is based on a combination of our knowledge of the principles behind the development and proliferation of corporate threats, as well as a strong understanding of our customers' business needs and which forms of protection are needed to provide a reliable, balanced solution. The solution addressed below differs in that it is simple to integrate and manage, in addition to having a relatively low total cost of ownership (TCO) while providing a high level of data security. Implementing this approach not only required a thorough rethink of the decade-old "pursuit paradigm" but has also laid the groundwork for a fundamentally new stage in the development of application control technologies.

A components-based model of modern data security


The components of modern security products

Reliable data security requires a comprehensive approach. Modern means of protection are comprised of several components, each of which performs specific tasks. In this model, we can see four main groups of components: interceptors, antivirus engines, application controls, and cloud services. Let us take a look at the functions of each component.

Interceptors act like sensors to help antivirus products plug into a running operating system process, so that other antivirus protection components are able to scan objects and events as necessary.

The following serve as interceptors:

• Driver – intercepts requests made by applications for files. When a file request is intercepted, an antivirus product can scan the file for malicious code, or verify the permissions of the operation in line with the host-based intrusion prevention system (HIPS). If an application's actions lead to the detection of malicious code or unauthorized access, a driver can either block the file request or prevent the application from launching.

• Network driver – helps maintain control over application operations within a network (preventing data leakages within a network, blocking network attacks, and so on).

• Plug-ins – libraries (modules) that can be added on to common applications (email clients, browsers, IM clients, etc.), and scan transferred data.

Engines are product modules designed to scan potentially malicious objects. There can be several scanning methods in one engine, and each antivirus developer may have their own list of and names for the methods they use.

The main types of engines are (a toy two-engine sketch follows the list):

• Statistics analyzers that work to detect malicious objects according to certain statistics (more often than not, this is related to the structure of files in a particular format).

• URL analyzers that check to see whether or not a web address that a user is going to or that has been sent by email is in a malware, phishing, or other specific category of database (such as, for example, parental controls).

• Heuristic analyzers – a technology that makes it possible to use just one signature to detect numerous malicious files, including previously unknown modifications of malicious software. At the same time, it also boosts the quality of detection and reduces the volume of antivirus databases.

• Emulators – modules that execute program code in an isolated environment for subsequent behavior analysis.
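To show how two of these engine types can be chained over a single object, here is a deliberately simplified Python sketch: a signature lookup first, then a crude statistics-style heuristic. The hash database and the threshold are placeholders invented for the example, not real detection logic.

import hashlib

KNOWN_MALWARE_SHA256 = {
    "0" * 64,   # placeholder entry; a real database holds hashes of known malware
}

def signature_verdict(data: bytes) -> bool:
    # Signature engine: exact match against a database of known-bad hashes.
    return hashlib.sha256(data).hexdigest() in KNOWN_MALWARE_SHA256

def heuristic_verdict(data: bytes) -> bool:
    # Toy statistics analyzer: flag objects made up largely of control bytes.
    if not data:
        return False
    control = sum(1 for b in data if b < 9 or 13 < b < 32)
    return control / len(data) > 0.3

def scan(data: bytes) -> str:
    if signature_verdict(data):
        return "known malware"
    if heuristic_verdict(data):
        return "suspicious (unknown)"
    return "no detection"

print(scan(b"plain text document"))   # -> no detection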

In most antivirus products today, one data security component is a bundle of technologies that forms part of the Application Control component. The latter works by using events from the interceptors and then processing those events with help from other components:


• Proactive defense module (PDM). The search for and detection of known malicious program behaviors (sequences, patterns) using malicious behavior pattern databases.

• Host-based intrusion prevention system (HIPS). HIPS scans each potentially threatening program action (most often atomic actions) against a list of rules which define whether or not those actions are permissible for a particular program. What's more, these rules can be customized for different types of software. For example, "trusted" software can do "everything", while "unknown and suspicious" programs may be prevented from executing certain actions.

• Exploit protection is meant to protect computers against malicious programs that exploit vulnerabilities in software and operating systems.

• Enterprise Application Control (EAC). This module controls the launch of different categories and/or versions of software in line with different sets of rules and permissions.

Cloud services help expand the abilities of both engines and application controls. The cloud makes it possible to hide some of the scanning logic (in order to make it more difficult for malicious users to reverse-engineer the process and subsequently evade scanning).

It also helps reduce the amount of signature and behavior template database updates for users and clients.

Application control as a key tool for securing corporate networks

The Application Control module described above allows users to regulate application activity using HIPS policies that are initially set by the developers of the antivirus product. Applications reviewed within the Application Control context are divided into four categories: safe, threatening, strong restrictions, and low restrictions. These categories are used to define the level of restrictions that can be assigned to different applications (HIPS policies). Each category of applications is assigned a set of rules regulating application access to different resources (files, folders, registries, and network addresses). For example, if an application requests access to a specific resource, Application Control will check to see if that application has access rights to that resource, and will then act in line with the assigned rules.
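As a rough illustration of how such category-based rules might be expressed, here is a minimal sketch. The four category names mirror those mentioned above; the rule table, resource types, and function name are hypothetical and do not represent any vendor's actual API.

    # Illustrative sketch of category-based HIPS access checks (not an actual
    # antivirus API). Each trust category maps to the resource types it may use.
    from enum import Enum

    class Category(Enum):
        SAFE = "safe"
        LOW_RESTRICTIONS = "low restrictions"
        STRONG_RESTRICTIONS = "strong restrictions"
        THREATENING = "threatening"

    # Hypothetical policy: which resource types each category may access.
    HIPS_RULES = {
        Category.SAFE:                {"files", "folders", "registry", "network"},
        Category.LOW_RESTRICTIONS:    {"files", "folders", "network"},
        Category.STRONG_RESTRICTIONS: {"files"},
        Category.THREATENING:         set(),   # access (and launch) denied outright
    }

    def is_access_allowed(app_category: Category, resource_type: str) -> bool:
        """Return True if an app in the given category may touch the resource type."""
        return resource_type in HIPS_RULES[app_category]

    # Example: a strongly restricted app asking for registry access is denied.
    print(is_access_allowed(Category.STRONG_RESTRICTIONS, "registry"))  # False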

Application Control also logs application launch records. This data can be used when investigating incidents and conducting various scans. An IT security professional or system administrator with this functionality at hand can promptly receive answers to the following questions in a standardized format:

• Which applications were launched and when over a specified period of time?

• On which computers and under which accounts did an application launch?

• How long has any given program been used?
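As a rough illustration of how such launch records could be stored and queried to answer the three questions above, here is a minimal sketch; the record fields and sample data are hypothetical, not any product's actual log format.

    # Hypothetical launch-record log and the audit queries listed above.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class LaunchRecord:
        app: str
        computer: str
        account: str
        started: datetime
        ended: datetime

    log = [
        LaunchRecord("skype.exe", "PC-07", "alice", datetime(2013, 5, 2, 9, 0),
                     datetime(2013, 5, 2, 11, 30)),
        LaunchRecord("excel.exe", "PC-03", "bob", datetime(2013, 5, 2, 10, 0),
                     datetime(2013, 5, 2, 17, 0)),
    ]

    def launches_in_period(records, start, end):
        """Which applications were launched, and when, over a period of time?"""
        return [(r.app, r.started) for r in records if start <= r.started <= end]

    def where_and_by_whom(records, app):
        """On which computers and under which accounts did an application launch?"""
        return {(r.computer, r.account) for r in records if r.app == app}

    def total_usage(records, app) -> timedelta:
        """How long has a given program been used in total?"""
        return sum((r.ended - r.started for r in records if r.app == app), timedelta())

    print(total_usage(log, "skype.exe"))  # 2:30:00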


These are the component functions (capabilities and ease of use) that determine how effectively a system administrator can put into place and maintain various security policies.

The balance between freedom of action and security

When selecting a data security model, it‘s very important to achieve a reasonable balance between freedom of action and security. Home users value the ability to install and use any software without limitations. Although the risk of infection is then higher than with a strict set of restrictions, a home user stores only his own personal information and can make his own decision about the risks of his data being disclosed or leaked.

In contrast, corporate users handle information that is not their own. The more stringent the control, the lower the data security risks: the leakage or loss of data that is critical to a business could lead to a standstill in a company‘s business processes and, as a result, financial and reputation damages.

For companies, the balance between security and user freedom means a balance between potential risk and the level of convenience for corporate users. Smaller companies tend to prioritize user convenience and, as a result, they minimize restrictions for them.

Meanwhile, larger corporate networks make security a top priority. Large companies use centralized security policies — standard rules for using corporate data resources. The ease of use is seen as less important than standardized software and maximum process transparency for sysadmins.

Typically, using a certain set of programs is sufficient for the needs of company employees. The ability to restrict software use to just those applications specified in rules set by an administrator and to block the remaining unwanted programs (unauthorized, illegitimate, or non-essential software) is a critically important option for corporate networks, not to mention command centers, industrial facilities, financial organizations, military agencies, and special purpose machines (such as ATMs and other types of terminals).

Maximum user convenience is offered in Default Allow, while Default Deny mode provides maximum security.

[Figure: Software inventory results for a specific directory]


The traditional approach to protection: Default Allow

With Default Allow, the traditional choice for personal and corporate products, users can launch any applications except those that are blocked or those for which limitations have been put into place. This approach focuses on maximum user convenience. Clearly, the ability to launch any application will require high quality detection technologies. In the Default Allow mode, all of the security components noted above play a role in the analysis of the executed programs. This helps detect both known and unknown threats. At the same time, detection quality will depend on the developer of the antivirus solution being used. However, as noted above, antivirus companies more often than not are working to counteract virus writers; in other words, there is always some kind of malicious program that has not yet been detected by antivirus security. Furthermore, with the Default Allow mode, if a program is not on the blacklist, then its launch and execution will be, by default, permitted. That means that Default Allow mode presumes a certain degree of risk: code that is allowed to launch could pose an as-of-yet unidentified threat.

In addition to malicious programs, there are also legitimate programs that qualify as unwanted for certain networks. This type of program will not fall within the blocking policy, because in Default Allow if there is no special policy to block a program, the program will be able to launch on the corporate network without any restrictions.

Let us take a look at two examples of how a program that is not affected by security policies can lead to problems for a company. An employee might install an instant messaging client on his computer — Skype, for example. One of the distinguishing characteristics of Skype is the encryption of data transferred via communication channels.

That means that the DLP (data loss prevention) system is not capable of tracking the transfer of confidential information beyond the boundaries of a secured perimeter or identifying that information for the recipient. Antivirus technologies do not block the use of this application, as it is not a malicious program. However, a malicious user conspiring with a company employee could potentially gain confidential information from that employee using Skype as a means of data transfer.

Here is another example: Kaspersky Lab experts assisted in an investigation of an incident that took place at one company. No malicious programs were detected, but the reason behind the incident involved an IT staff member who had installed a legitimate remote administration utility on several workstations. The staff member had been fired over a year ago, but no one knew about the utility, which was still running and allowed the fired employee unauthorized access to the entire corporate network and all of the data stored on it. As a result, the maximum user convenience offered by the Default Allow mode leaves corporate networks vulnerable to unknown threats and unwanted software. Moreover, control over all executables requires considerable resources. However, in most cases, in order to perform their tasks, company employees really only need a restricted, specific set of programs. That means that a simple solution is the most logical one: only necessary, clean software should be put on a whitelist, and the launch of all other programs can then be denied by default. That application control mode is called Default Deny.

Default Deny mode

In contrast to Default Allow mode, Default Deny blocks the execution of any software that has not been pre-approved, i.e., on a whitelist. As a result, no unknown or unwanted software is permitted to launch.
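The difference between the two modes can be summed up in a few lines. This is a minimal sketch assuming hash-based lookups against a whitelist and a blacklist; the list contents and function names are illustrative only, not any product's actual mechanism.

    # Minimal sketch contrasting the two launch-control modes described above.
    import hashlib

    WHITELIST = {"3f2a..."}   # hashes of approved executables (placeholder values)
    BLACKLIST = {"9c4e..."}   # hashes of known-malicious executables (placeholder)

    def file_hash(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def may_launch(path: str, mode: str = "default_deny") -> bool:
        h = file_hash(path)
        if mode == "default_allow":
            # Traditional model: anything not explicitly blacklisted may run.
            return h not in BLACKLIST
        # Default Deny: only explicitly whitelisted executables may run.
        return h in WHITELIST

    # Example: may_launch("newtool.exe") returns True only if the file's hash
    # is already on the whitelist; in "default_allow" mode it would run unless
    # the hash were blacklisted.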

[Figure: Stages in the Default Deny life cycle]


Basically, when in Default Deny mode, a corporate network operates in an isolated software environment permitting only the launch of programs that are essential and sufficient for the company‘s business needs.

In addition, Default Deny also blocks the launch of malicious, non-essential, unlicensed, and unknown software, which reduces the costs of analyzing these types of programs, and which the Default Allow mode would permit.

When working in Default Deny mode, performance requirements for the systems under sysadmin control and for the resources required for analysis are substantially lower. As a result, the security system has an overall lower impact on network performance.

As can be seen in the graphic, unlike the traditional security model, Default Deny mode application control is carried out when permitted programs are launched, rather than during the execution process. Thus, data security risks are minimized at the earliest possible stages.

The main advantages of Default Deny:

1. The risk of malicious and unwanted software launching is minimized:

• Unknown applications are blocked, including new modifications of malicious programs and those used in targeted attacks. As a result, a secure environment is maintained.

• The ability to block the installation, launch, and execution of illegitimate / unlicensed and non-essential software, such as a variety of chat clients, games, software versions with known vulnerabilities, and system “performance enhancers”. As a result, employees are able to stay more focused on their jobs and their performance improves.

2. Application analysis requires far fewer resources. As a result, the security system has a much lower impact on network performance.

3. Last but not least: Default Deny mode also pushes down costs, and essentially lowers the overall total cost of ownership of a security system: there are fewer bugs, fewer complaints, and a more manageable workload for TechSupport staff.

As a result, using this alternative approach with its powerful application control, a company can substantially boost the level of corporate network data security while considerably lowering costs for security system maintenance.

As we mentioned above, this is a fundamentally different and more proactive method which, in the opinion of Kaspersky Lab experts, could change the traditional perception of corporate network security.

[Figure: Software inventory results for a specific directory]


It’s brilliant — but is it user-friendly?

The idea behind Default Deny is very simple and logical. However, until now this approach was focused exclusively on a very narrow target audience. This was primarily because of the technical challenges arising on the road toward developing a solution that would be appropriate for broad use, and without a number of critical limitations.

New developments in Application Control

Using Default Deny mode entails a shift in priorities when selecting security policies — moving away from user freedom and convenience toward accomplishing the main goal of any data security system: minimizing the risk of data leakages and/or the loss of any critical business data.

However, the early forms of strict application control system caused major function limitations that made the use of this arrangement nearly impossible. Maintaining a quality Default Deny mode requires additional functions, or else a corporate network may not be able to operate as needed. When transitioning to the Default Deny mode, a sysadmin will need to tackle a number of tasks. In order to make the process easier, Application Control — as the main component that will be managing a corporate network‘s applications — must first undergo some substantial changes.

So, the use of Default Deny mode is possible and practical only after the following functions are put into place:

• Inventory – collecting information about all of the software programs installed on the corporate network, all computers, and all network resources.

• Categorization – dividing up the inventoried software into functional groups: OS components, browsers, multimedia, games, etc.

• Configuration, or Application Management – introducing Application Control restrictions for certain users or groups of users for certain application categories (basically defining security policies).

• Dynamic Whitelist – a knowledge base about all of the varieties of software around the world. Current and regularly updated information about applications, their reputations, categories, and popular or recommended analogues. This is expert-level data that is supplied by security solution developers.

• Software security update mechanisms – autonomous means of maintaining regular updates for commonly used software, thus relieving administrators of the need to repeatedly go through the lengthy procedures of identifying and legalizing applications that are updated on the network.

• Maintaining a list of trusted users and software sources – tools provided to network administrators, security engineers for simple, convenient ways to legalize software. In particular, this involves creating lists of trusted network resources on the Internet and the local corporate network (HTTP/FTP/Shared Folders/etc.) – sources of clean apps that are permitted for installation and use.

[Figure: Option for user file categorization]


• Testing and supporting a beta mode – a means of static or dynamic detection of systemic collisions arising during the introduction of security policies (incompatibility / inoperativeness of various applications) that could lead to interference in business processes.

• User feedback and support – tools for managing incidents to ease the user-support process as much as possible.

• Monitoring and audit – various tools for collecting, aggregating, and standardizing reports.

We will now address the Whitelist Security Approach (WSA) in more detail, starting with its core component: a dynamic database of clean files (a Dynamic Whitelist).

Dynamic Whitelists involve both technical and organizational aspects, and without attention to both it would not be possible to achieve the Whitelist's maximum effect.

The Dynamic Whitelist

So, what is a Dynamic Whitelist? It is essentially a knowledge base about all of the different types of legitimate software programs. From a technical point of view, a Dynamic Whitelist is an enormous database of “clean” software that can be continuously updated with different types of files, including new installation files, and — most importantly — information about these objects. The quality and completeness of the data in these types of expert-level data resources depends on their suppliers. Leading security software developers are the ones compiling Dynamic Whitelists.

[Figure: An example of a message automatically sent to a system administrator in the event that an application is blocked]


A Dynamic Whitelist is a necessary component for three of the seven tasks that need to be accomplished to make Default Deny work (see the list above). Clearly, the quality of the solution provided by a vendor will have a direct correlation to the quality of the database in that solution.

To accomplish the tasks above, the Whitelist should contain:

1. A software database: the developer, product name, latest version, other information primarily derived from the attributes of each program

2. Additional information (expert-level data):

a. information about the risk level – software classification, its reputation: trusted, untrusted, potential threat, etc.

b. software category: OS component, browser, games, office apps, etc.

c. the software‘s business purpose: accounting and finance, marketing, HR, CRM, logistics, etc.

d. software alternatives – data about similar programs

e. statistics – prevalence, regional distribution, etc.
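As a rough sketch of what one record in such a knowledge base might hold, the structure below mirrors the fields listed above; the field names and example values are illustrative, not any vendor's actual schema.

    # Sketch of a Dynamic Whitelist record holding the fields listed above.
    from dataclasses import dataclass, field

    @dataclass
    class WhitelistEntry:
        developer: str
        product: str
        version: str
        reputation: str          # e.g. "trusted", "untrusted", "potential threat"
        category: str            # e.g. "OS component", "browser", "office apps"
        business_purpose: str    # e.g. "accounting and finance", "HR", "CRM"
        alternatives: list = field(default_factory=list)  # similar programs
        prevalence: dict = field(default_factory=dict)    # region -> share of installs

    entry = WhitelistEntry(
        developer="Example Soft Ltd.",      # hypothetical vendor
        product="LedgerPro",
        version="4.2.1",
        reputation="trusted",
        category="office apps",
        business_purpose="accounting and finance",
        alternatives=["BooksPlus"],
        prevalence={"APAC": 0.03},
    )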

What other requirements are there for Dynamic Whitelists aside from data?

First and foremost, a Dynamic Whitelist should be dynamic, as its name implies. Each day, multiple new legitimate applications and updates for existing

[Figure: Kaspersky Lab's category catalog]


applications are released. That means that security software developers have to respond immediately to any changes in the software world, and promptly update their knowledge bases.

In order to do that, they must also regularly and promptly add entries to their databases of clean software programs from many sources in different parts of the world. These updates must take place automatically, since the volumes of data are enormous (terabytes of data every day).

For this purpose, suppliers of Dynamic Whitelists send so-called 'crawlers' out onto the Internet — crawlers act as search agents that monitor new software and, when needed, download new applications.

To keep databases up to date, it is also important to develop technological partnerships among vendors and major manufacturers and distributors, i.e., independent software vendors. The goal of these partnerships is to obtain, process, and analyze (classify and categorize) new software before it is publicly released in order to minimize any false positives or instances where a security solution and a partner‘s software are incompatible.

Another possible source of data for keeping databases up to date is a global data network created by a vendor based on user communities. This type of data network offers a major competitive edge — it helps track metadata about the software launched on user computers, and it adds data on the emergence of new apps and different software updates to the knowledge base.

It is necessary to carefully control all of the programs entered into the Dynamic Whitelist, and most importantly to keep their reputations up to date. A program classified as 'clean' today can, after more careful analysis, turn out to be a carrier of threatening malicious code tomorrow.

Note that regularly scanning the Dynamic Whitelist is no small task. In addition to automated data processing and analysis, it also requires a team of specialists capable of analyzing program code with potential logical collisions and issuing a final verdict. Small companies and developers of “free” antivirus products cannot afford these types of dedicated antivirus labs. Furthermore, the specifics involved in processing malicious and clean software are different. Ideally, a company will not just have a dedicated antivirus lab but also a specialized Whitelisting lab where experts track incoming data flows, study intellectual systems, and respond promptly to emergencies (Kaspersky Lab has such a dedicated Whitelisting Lab).

From theory to practice: Kaspersky Lab’s Endpoint Security 10

Corporate network admins are faced with complex, often repetitive tasks to support numerous, multi-purpose workstations. The Whitelist Security Approach (i.e., Default Deny mode) guarantees a much higher security level for corporate networks. Furthermore, running a network on Default Deny mode, with its strict system restrictions, requires that the products involved in the system are capable of large-scale task automation to facilitate system administration.

Let us take a look at how the transition from theory to practice is made using Kaspersky Lab‘s Endpoint Security solutions as an example, following the program step-by-step through its life cycle, from software inventory to corporate network maintenance (after product installation).

Currently, exploit protection is included in just a handful of antivirus products, but at Kaspersky Lab, we believe this type of protection is critical. Kaspersky Lab's exploit protection module is called Automatic Exploit Prevention (AEP). It uses an analysis of exploit behavior and gives users extra control over applications that are more susceptible to malicious user attacks. AEP prevents exploits from launching. If an exploit does somehow launch, AEP blocks it from carrying out any malicious behaviors.


The Whitelist Security Approach was first used in Kaspersky Endpoint Security 8 for Windows in 2011. In 2013, Kaspersky Endpoint Security 10 for Windows offers even more functionality, including in the area of Application Control.

• Inventory. Right after the product is installed, an administrator will need to conduct an automatic inventory of all of the software installed on the corporate network. Here, Kaspersky Endpoint Security collects information about all of the software programs installed on network computers and other network resources.

• Categorization (automatic, using Dynamic Whitelist). After the automatic inventory, all software programs are then categorized, and divided into groups according to the rules set out in the product (OS, multimedia, devices, games, browsers, etc.). This function is not offered by all Application Control solution developers. We believe, however, that this function is essential in facilitating the management of a large number and variety of software programs installed on corporate networks. That is why the Dynamic Whitelist has 16 different categories at the top level and 96 sub-categories (see our category catalog).

In order to define the critically important OS components and drivers, Kaspersky Endpoint Security includes a special category of OS files called Golden Image. This category includes all of the requisite components for Win XP, Vista, Win7, Win8 (32 and 64) and over 15 localizations for each (over 100 versions and localizations). All an administrator has to do is add files into the Golden Image category from the local Whitelist database, and the Default Deny configuration is ready.

• Categorization (manual). It‘s important to take into account the fact that a Whitelist from any security solution developer cannot contain data about every single software program installed on a company‘s network. For example, companies often have specialized proprietary software developed either by the company itself or custom-made. Kaspersky Endpoint Security 10‘s Application Control allows admins to create a local Whitelist.

Furthermore, Kaspersky Endpoint Security 10 also features multi-vector categorization — in other words, one application can be in several categories at once.

• Configuration. Kaspersky Endpoint Security offers the ability to manage categorized software for specific users and groups of users. For example, software in the Accounting category can be approved for accountants only, so that no other users will have access to the company‘s financial data.

It is at this stage that the use of unlicensed or non-essential software can be restricted. For example, it is possible to block the use of any software for which the company does not hold the requisite licenses, or to block instant messaging programs such as Skype. It is also possible to block the use of specific versions of software, such as blocking all browsers except for a specific version of Internet Explorer.
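A configuration along these lines can be pictured as a simple mapping from software categories to the user groups permitted to run them. The sketch below is illustrative only and does not reflect the product's actual policy syntax.

    # Illustrative mapping of software categories to permitted user groups
    # (names are hypothetical, not product configuration syntax).
    POLICY = {
        "Accounting":        {"accountants"},
        "Browsers":          {"all"},            # could be narrowed to one version
        "Instant messaging": set(),              # blocked for everyone
    }

    def category_allowed(category: str, user_group: str) -> bool:
        """Return True if members of the group may run software in the category."""
        allowed = POLICY.get(category, set())
        return "all" in allowed or user_group in allowed

    print(category_allowed("Accounting", "marketing"))    # False
    print(category_allowed("Accounting", "accountants"))  # True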

• Secure software updates. The automatic software update function in Kaspersky Endpoint Security features Trusted Updaters technology, which helps perform secure update procedures for products while factoring in the complex sequence of program actions executed during the update process.

• Testing and maintaining a test mode. Since introducing strict network policies involves a good deal of responsibility, we have also incorporated a special Test Mode that a system administrator can use for modeling and evaluating how any rule works in practice. Applications aren't actually blocked in Test Mode, but an administrator can see from reports what the system would look like if a particular arrangement were running. This helps sysadmins make the appropriate adjustments to the rules before officially rolling out new policies, avoiding negative responses from users and avoiding interference with company business processes that could arise if the new policies made work more difficult.

• User feedback and support. A company‘s IT environment is ever-changing, which is why users should have the ability to ask a system administrator for permission to launch new software, and sysadmins should be able to either block or allow that request by simply pressing a button in a user-friendly interface. Kaspersky Lab‘s product provides the ability for both. To ensure flexibility — even in Default Deny mode — Kaspersky Lab allows


administrators to manage user complaints and requests. If it turns out that a certain application is blocked but the user believes that application is essential to perform his official duties, all he has to do is click on “Send a Request” and a notification will automatically be sent to the administrator.

Conclusion

The increased number and, most importantly, the complexity of threats means that antivirus software developers have to search out new solutions to provide corporate networks with effective protection. Whitelist Security Approach is a new method that allows only trusted, whitelisted programs to launch and run. As a result, a malicious program cannot simply launch on a system. This approach provides protection against complex and unknown threats, including targeted attacks.

Whitelist Security Approach (WSA) is a new development in Application Control that complements the Default Deny mode and Dynamic Whitelist technologies. Using the heightened-security Default Deny mode means introducing additional functions.

Application Control should include several simple mechanisms, such as inventory, categorization, configuration (Application Management), flexible local whitelist policy management, and the ability to use a cloud-based Dynamic Whitelist, capable of responding immediately to regular changes in the software world. Furthermore, functions such as testing and support in a Test Mode are important for making the transition to Default Deny mode properly.

Whitelist Security Approach helps system administrators accomplish a number of tasks:

• Control (allow, block, flexibly restrict or audit) the launch of clean apps on workstations in line with the company‘s security policies

• Receive expert-level data about the status of files from the Dynamic Whitelist database directly from software developers

• Guarantee steady operation of clean, permitted applications

• Manage software categories, rather than specific applications

• At the stage of corporate usage, monitor, control, and respond to problems that result when an application is blocked

• Streamline the use of company IT resources and boost performance by controlling the use of third-party and unlicensed software on the network

Application Control and Default Deny mode together are powerful, convenient tools that simplify a system administrator‘s job when it comes to managing corporate network workstations and keeping them secure. At Kaspersky Lab, we believe that the Whitelist Security Approach is a key tool in the corporate network security of the future. At the same time, we believe there is no panacea or one single technology capable of protecting computers against all threats.

That is why the best choice for corporate networks is the use of a powerful endpoint product that combines a variety of protection technologies. Only multi-level system security and control can provide the highest possible level of protection for corporate networks.

To learn more, visit http://whitelisting.kaspersky.com/

Over the last year, independent test labs have started working more with Application Control. Right away, two companies verified the effectiveness of Application Control technology for protection against targeted attacks and managing unauthorized software.

In early 2012, West Coast Labs published a report on the results of the industry‘s first independent test, where Kaspersky Lab‘s technology ranked first. Later, Dennis Labs also conducted a comparative study, and in early 2013 released the results. Once again, Kaspersky Lab was ranked among the best.


Transforming Analytics: Challenges and Opportunities for the Banking Sector
By Margarita Anagnostopoulos, VP – Business Analytics, Thakral One Pvt. Ltd.

In 2011, Bangladesh Bank, the central bank of Bangladesh, announced plans to license new banks into the country’s financial system – a move that was expected to increase the spread and penetration of formal financial services, augment corporate credit capacities and improve the overall quality of banking for citizens.

As it stands today, the license program has successfully increased the number of commercial banks in the country by almost 30%, and consumers are starting to reap benefits. However, this has also significantly intensified competition between banks. With new entrants at the heels of incumbents, banks need to find ways to serve customers more effectively, and completely, and optimize their risk profiles while doing so. This requires an intimate and “live” understanding of the customer segments that they currently serve, and the segments that they strategically intend to serve to ensure long term profitability. While customers are getting more sophisticated, banks need to keep up to stay relevant.

Consequently, bank CIOs are seriously considering further investments into tools that can help to make sense of the abundant data that is residing in their systems. Data warehousing solutions, Business Intelligence (BI) suites and predictive analytics tools, enabled by a more robust technology infrastructure, are now competitive necessities. Banks are looking to build enterprise wide performance management capabilities to generate insights that can empower both management and frontline staff. The latter have a huge opportunity to transform customer data into insights that can optimize and align customer relationships across channels and transactions.

In addition to the intensified competitive pressure, the current regulatory environment makes it almost mandatory to integrate risk management into an overarching analytics infrastructure. An integrated risk analytics solution will help banks quantify and hence, manage risks at the product, portfolio or organizational level, and optimize performance to match a pre-defined risk profile.

The support of an experienced consulting partner will be invaluable to achieve the abovementioned objectives. While selecting the right analytics solution is key, the partner must possess the expertise and experience to institute processes, as well as the discipline required to ensure data is properly managed - from raw numbers to concrete and actionable insights. There are numerous documented case studies of failed Business Intelligence (BI) initiatives and expensive (but low ROI) investments into analytics solutions.

Further, once these tools are successfully implemented, domain experts will be required to train users and drive change management initiatives to ensure enterprise-wide adoption. In certain cases, consulting partners can also assist to “manage and monitor” the processes actively post implementation, to secure the maximization of the investment.

References:
http://asianewsnet.net/Bangladesh-Bank-gives-green-light-to-ninth-new-ban-48129.html
https://globalconnections.hsbc.com/global/en/tools-data/treasury-management-profiles/bd-2013/banking
http://www.bangladesh-bank.org/recentupcoming/news/apr092012newse2.pdf
http://www.deloitte.com/assets/Dcom-UnitedStates/Local%20Assets/Documents/us_consulting_Analytics_in_banking_102711.pdf


Author Details:

Margarita Anagnostopoulos, VP – Business Analytics

Thakral One Pvt. Ltd.


Using RFID Technology for Digital Bangladesh
By Professor Dr Syed Akhter Hossain, Head of the Dept. of CSE, Daffodil International University

Radio Frequency Identification Device (RFID) systems have been used quite extensively for various types of applications involving device, document, or product tracking, identification, access management, etc., in complex industrial environments. These applications have to interface with different RFID hardware devices, including readers and tags, to get data. The RFID devices have different connection interfaces to connect to the system. Besides, different tags and readers use different protocols to communicate, often in different frequency ranges, namely Low Frequency (LF), High Frequency (HF), and Ultra-High Frequency (UHF), as well as some hybrid bands. It has been found that porting an application becomes very difficult when it is migrated from one hardware platform to another within the same application domain. This poses a great hindrance to wide adoption of RFID in small and medium enterprises (SMEs), as the cost of porting becomes very high. In many cases, there are a large number of RFID hardware parameters that have to be configured before the system can be used efficiently, based on device characteristics. These parameters include the antennae to be used, the power level, the communication parameters, the sensor circuit, etc. In order to configure these parameters, the application has to be aware of the RFID device it is using.

An open-source RFID application framework allows us to create RFID applications without worrying about hardware and protocol intricacies, since hardware variation is the norm in RFID. Besides, it also allows multiple applications to interface with the RFID tags and readers in a technology-neutral manner (in terms of protocols, air interface, etc.). The proposed system supports multiple applications simultaneously interacting with one or more readers, or even a part of a reader. It is possible to build applications having different analysis and processing requirements using this application framework, as shown in Figure 1.

The term Radio Frequency Identification (RFID) describes a system which is used to track objects,

Figure 1: Sample RFID Application Framework
Figure 2: RFID Systems Components



where the identity (e.g., a unique serial number) and the data of the object are transmitted wirelessly using radio waves. In this scenario, data is stored on an electronic data-carrying device called a tag, which is attached to the object being tracked or monitored. The power supply to the tag and the data exchange between the tag and the reader are achieved without any physical contact with the reader devices, instead using alternating magnetic or electromagnetic fields at radio frequency.

RFID provides an efficient way of automatically identifying the objects. This great property of the RFID devices has enabled it to be used in many applications concerning identification and tracking.

Among different applications, one of the most widely used application areas is access control systems, where RFID-based plastic cards are used to identify and authenticate the card-holder's entry to the facility.

The RFID systems are extensively used in warehouses and stores for supply chain management, inventory management, and movement management. This has led to a huge increase in the efficiency of warehouse operations and keeping the optimum inventory in the stores.

The RFID system components shown in Figure 2 cover all the different sub-components. Generally, RFID tags come with varying amounts of storage capacity based on the application. This storage capacity ranges from 1 bit, most often used for theft prevention in anti-theft RFID systems, to a few kilobytes, used in short-range access control management systems. Tag memory can also be either read-only, write-once, or rewritable. Read-only tag memory is set to a particular value at manufacturing time and is confined within the package of the RFID tag. In a write-once tag, on the other hand, the value can be set once; after the value is set, these tags act like read-only tags. Write-once tags are generally used to track objects that have to travel across multiple organizations while the tag information needs to remain the same.

The core system component RFID readers, also called interrogators, are used to recognize the presence of RFID tags within the range and communicate with the tag and thereafter exchange data. This communication is done by transmitting and receiving RF energy. In the process an antenna of a nearby tag picks up this energy, and then converts it into electrical energy via induction. This electrical energy powers the semiconductor chip attached to the tag antenna, which stores the tag’s identity and other data for later processing. In order to communicate with the tags, an RFID reader needs to have one or more antennae. Some readers have only one antenna, while other readers are able to support many antennae which

Figure 3: Different RFID Tags
Figure 4: RFID Compact Tags


can be placed at different locations. The limitation on the number of antennae a reader can control is the signal loss on the cable connecting the transmitter and receiver in the reader to the antennae.

In the present business context of RFID, readers are available in different frequency ranges, namely LF, HF, UHF, etc. The operating frequency of LF readers is below 135 kHz; the ISO standards ISO 11784 and ISO 11785 define the LF-based RFID infrastructure, and LF systems are used in the tracking of animals. HF readers operate at 6.78 MHz, 13.56 MHz, 27.125 MHz, 40.680 MHz, etc. Some standards defined in this range are ISO 14443 and ISO 15693, and HF-based systems are used in applications involving ID and access control management.

The UHF readers operate in the range of 300 MHz to 3 GHz. The important frequencies in this range are 433.920 MHz, 869 MHz, and 915 MHz. The standards defined for UHF-based RFID tags in this range include ISO 18000 and EPC Gen2. UHF-based RFID systems are used in applications involving object tracking, and asset and inventory management.

RFID Software Applications

RFID software applications are the software components of the system that make the necessary calls to the APIs supported by the RFID system in order to perform their tasks. These applications require data to be collected from the RFID hardware and middleware. The applications then apply various business rules to this data to extract information. The business rules vary in complexity depending upon the application's requirements.

The applications interface with the readers using the proprietary APIs provided by the reader manufacturer. There has not been any standardization on the reader API front, making it necessary for the application developer to know about the various reader APIs available and to change the application code in order to make the application use the services of another type of reader. Each of the connected readers keeps reading all tags in the read range. This can result in thousands of RFID read observations per second, including the same RFID tags being read multiple times. In addition to the sheer volume of data, the raw observations need further processing to be meaningful for the enterprise applications.

RFID Middleware

Any application that interfaces directly with the RFID reader has to process a humongous amount of data. Such applications also have to take care of the different connection interfaces used by the readers as well as their protocols. Handling these different interfaces becomes a huge task, as the interfaces require different APIs to program. In the entire RFID eco-system and

Figure 5: Component of RFID middleware
Figure 6: RFID in action for security management


infrastructure, an important component is the RFID-specific software named RFID middleware. This middleware basically translates the raw data captured from the RFID tag, be it passive or active, into useful enterprise information which can be further utilized for the targeted use. This information can then be fed into other databases and applications, such as inventory management, ERP, CRM, etc., for further processing. In the specific case of read-write RFID tags, the software is also required to control whether data can be written to a tag, which tag should contain the data, and what initiates the process of adding data to, or changing data in, the tag.

In order to make applications independent of the various types of readers and their connection interfaces, there is a need for an intermediate layer between the RFID reader and the application. This requirement is fulfilled by a software unit or layer called the RFID middleware, which provides the following:

• A device characteristic independent interface to the application based on API

• Processing of the raw data and reporting only the aggregated and meaningful data as configured by the application

• Providing an application-level interface for managing readers and querying RFID observations.

The components of the RFID middleware are shown in Figure 5, and the following sections elaborate briefly on the sub-components.

Reader Interface

Through the reader interface, applications can interface with the readers connected to the system by making use of the various APIs provided by the readers. However, this needs a good deal of application porting effort based on each API. The reader interface component of the middleware eliminates this effort by exposing a single abstract interface to the applications, simplifying reader access.
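A minimal sketch of such an abstraction is shown below: applications are written against one abstract reader interface, and vendor-specific details are hidden behind adapters. The class and method names are assumptions for illustration, not part of any actual middleware API.

    # Sketch of a single abstract reader interface hiding vendor-specific APIs.
    from abc import ABC, abstractmethod

    class RFIDReader(ABC):
        @abstractmethod
        def connect(self) -> None: ...
        @abstractmethod
        def read_tags(self) -> list:
            """Return the IDs of all tags currently in the read range."""
        @abstractmethod
        def disconnect(self) -> None: ...

    class SerialVendorXReader(RFIDReader):
        """Adapter wrapping one vendor's proprietary serial API (hypothetical)."""
        def connect(self) -> None:
            pass  # open the serial port, apply power/antenna settings, etc.
        def read_tags(self) -> list:
            return ["E200-0001", "E200-0002"]  # stubbed observations
        def disconnect(self) -> None:
            pass

    # Applications work only against RFIDReader, so swapping hardware does not
    # require changing application code.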

Event Management

Event management and co-ordination play an important role in middleware functions. An RFID-enabled environment has several readers employed for identifying the tags to track objects based on the enterprise requirements. Each of these readers transmits RF signals several times a second in order to read the RFID tags around them. This can result in thousands of RFID tags being read per second. If raw observations from the readers were exposed directly to applications, this would require enormous processing at the applications' end.

In addition to the sheer volume of data, the raw observations need further processing to aggregate data and present only the meaningful data to the enterprise applications for appropriate processing. As the RFID technology is still not immune to data losses, it is possible that in some cycle a tag is identified whereas in other cycles, it may not be identified. With raw processing at the applications, the application will have to continuously adjust to the fluctuating observations coming from the readers.

These raw observations from various RFID readers therefore lack the meaningfulness for the applications. As a result more processing needs to be done to map these raw observations to coarser events that are meaningful to applications. In this scenario a middleware helps by consolidating,

Figure 7: Different RFID tags


aggregating, and filtering the raw observations coming from different readers and sensors and provides application-level context. The process of smoothing out the raw RFID observations coming from readers to make them more meaningful for enterprise applications is called event filtering.
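As a rough illustration of event filtering, the sketch below collapses noisy per-cycle reads into "appeared" / "disappeared" events while tolerating short read losses. The smoothing logic and the tolerance parameter are illustrative only, not a specific middleware's algorithm.

    # Minimal sketch of event filtering: raw per-cycle tag reads are smoothed
    # into higher-level appearance/disappearance events.
    def filter_events(read_cycles, miss_tolerance=2):
        """read_cycles: list of sets of tag IDs seen in each read cycle.
        Yields (cycle_index, event, tag_id) tuples, smoothing over short gaps."""
        present, missed = set(), {}
        for i, cycle in enumerate(read_cycles):
            for tag in cycle - present:
                present.add(tag)
                yield (i, "appeared", tag)
            missed = {t: missed.get(t, 0) + 1 for t in present - cycle}
            for tag, gap in list(missed.items()):
                if gap > miss_tolerance:       # tolerate brief read losses
                    present.discard(tag)
                    yield (i, "disappeared", tag)

    cycles = [{"A"}, {"A"}, set(), {"A"}, set(), set(), set(), set()]
    print(list(filter_events(cycles)))
    # [(0, 'appeared', 'A'), (6, 'disappeared', 'A')]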

Application Interface

Application interface provides handshaking with the service applications. These applications require a service-oriented interface that provides application-level semantics to the collection of RFID data. Following the principles of service-oriented architecture, this interface has to be loosely coupled and asynchronous.

RFID Standards

The RFID tags are used for tracking objects not only within an organization but also across organizations. Therefore, the tags have to be read at different places, and keeping the same type of reader at every place is not possible.

In order to read the tags at different places and to enable readers to communicate with different tags, there is a need for standardization that is followed across different industries.

These standards define, among other things, the various packet formats for communication, the transmission protocols, the initialization, and the anti-collision mechanisms to be followed by a specific implementation. There are several standards defined by the ISO, the worldwide union of national standardization institutions.

There are different standards defined based on the operating frequency range and on the application for which the RFID system is intended. Some relevant standards are the following:

1. ISO 14443: Used for proximity coupling smart cards operating at 13.56 MHz, offering a maximum read distance of 10 cm.

2. ISO 15693: Used for vicinity coupling smart cards operating at the 13.56 MHz frequency, offering a maximum read distance of 1 meter.

3. ISO 18000: Used for RFID item management; defines the air interface at different frequency ranges.

4. EPC Gen2: The UHF RFID protocol for communications at 860 MHz to 960 MHz.

The RFID system has several variables – interfacing protocols, different reader connection interfaces, different tag types, and memory organization – as required in any specific application area.

These different features make it difficult to build applications and to port them to multiple readers in enterprise usage. The RFID middleware software component in the system makes it possible to overcome these variations and difficulties by giving an abstract view of the RFID hardware, making the development of RFID enterprise applications easier.

RFID System Design

RFID system design has a three-tier architecture, as shown in Figure 8, with the application framework, the middleware (called SmartRF), and the RFID hardware as its components. The RFID hardware consists of the RFID readers and the RFID tags. The middleware is a software component which

Figure 8: RFID System Design Architecture


provides a device independent interface to the applications for accessing the RFID hardware. The intended application framework uses the services of the middleware to present a generic application building environment for ease of operation.

The framework uses a device independent visualization of the RFID hardware provided by the middleware to build the applications using well defined API. As a result the RFID applications are independent software which use the services of the Application framework for accessing the tag data read through the hardware to implement the business logic using a generic application framework.

The RFID middleware provides a device neutral, protocol and platform independent interface to the application framework for ease of implementation. This provides the following functionality to the application framework:

• Interacts with the hardware and provides access to the devices and tags through tag and reader abstraction layers.

• Provides the application framework with the tag data from various reader devices and reports the data after applying application-specific filtering of the raw data.

• Provides a set of functions for the applications to interface with the RFID hardware in a device-neutral manner.

This application framework defines a generic way to build RFID applications, which can be categorized based on their requirements. The framework described here provides the ability to develop applications catering to these kinds of requirements with the help of the middleware. The application framework interfaces with the middleware to collect the data read by the RFID readers. The framework also has to provide data to the application to process and present to the end-user. The application framework takes the configuration parameters specific to the application and passes them on to the middleware.
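The three-tier flow can be sketched as follows, with the application passing its configuration through the framework to the middleware and receiving filtered tag data back. All class, method, and parameter names here are hypothetical, for illustration only.

    # Sketch of the three-tier flow: application -> framework -> middleware -> reader.
    class StubReader:
        def read_tags(self):
            # Raw observations, including duplicate reads of the same tag.
            return ["E200-0001", "E200-0001", "E200-0002"]

    class Middleware:
        def __init__(self, reader):
            self.reader = reader
            self.config = {}
        def configure(self, config):
            self.config = config
        def observations(self):
            # Report de-duplicated tag IDs rather than raw reads.
            return sorted(set(self.reader.read_tags()))

    class ApplicationFramework:
        def __init__(self, middleware):
            self.middleware = middleware
        def run(self, app_config, business_logic):
            self.middleware.configure(app_config)
            return business_logic(self.middleware.observations())

    framework = ApplicationFramework(Middleware(StubReader()))
    # Example business logic: count distinct tags seen at a checkpoint.
    print(framework.run({"report_interval_s": 5}, lambda tags: len(tags)))  # 2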

Conclusion and Upcoming

In this article, the Radio Frequency Identification (RFID) application framework has been discussed from the point of view of middleware. It is clear that RFID middleware plays a crucial role in developing RFID enterprise applications, but the challenges of heterogeneous device integration continue even as middleware components continue to develop.

This article has provided an understanding of the RFID application framework and elaborated on different RFID standards and the roles of the different sub-components of the framework. In a future article, a more detailed look will be presented to readers on using the RFID application framework for the successful implementation of enterprise applications for digital services in Bangladesh.

The author is engaged in RFID technology based research and has implemented different RFID-based solutions in the areas of access control, asset tracking and management, document tracking, and others.

Author Details:

Professor Dr Syed Akhter Hossain, Head of the Dept. of CSE, Daffodil International University
[email protected], [email protected]


How CIOs and CTOs should Improve their Businesses with the Private Cloud
By Dan Woods, Contributor

The real story of the cloud has not been told in most industries. It is not about technology but about new ways of running a business. Adopting flexible infrastructure is much easier than understanding how that flexibility translates into business value.

The most dramatic acceleration of cloud adoption will be driven by business success. When a company in a particular industry shows a business advantage based on the flexibility and speed of cloud infrastructure and uses that advantage to pummel the competition, the rest of the industry will be forced to follow. The transformation will involve core business processes, not just IT infrastructure improvements.

But how to be the company that does the pummeling rather than the victim? This should be a vital question for CIOs and CTOs and it will require a new kind of leadership. Remember, business executives will not be able to imagine how more flexible infrastructure can change business processes. CIOs and CTOs must take on this innovation and design challenge. To do so, they will have to understand more than they know now about the key business processes in their companies and search for the opportunities to improve those processes with the flexibility of the cloud. This is no small task and will provoke discomfort all around, but the CIOs and CTOs who pull this off will transform their role in their companies and help win the next round in the marketplace.

My recommendations for how to find these opportunities are based on four observations, the first two about the cloud, one about the way that CIOs and CTOs must improve their management practices, and one about the misunderstood discipline of value-driven Business Process Management.

The first observation is that the cloud is immature. The starting point for any exploitation of the public cloud must acknowledge the fact that the cloud is only ready for a portion of today's business computing. Very few businesses will run their entire businesses entirely in the public cloud in the next 10 years because the public cloud is not ready to host applications as they are currently architected and a wholesale rewrite isn't going to happen overnight. Brad Reinboldt, in his Forbes.com article “Virtualization And The Cloud: The Trouble Is Troubleshooting” reported the results of a survey about difficulty troubleshooting applications that run in the cloud. In my “Cloud Don'ts: Advice to Avoid Wasted Investment” column, I pointed out how the infrastructure of the public cloud is sorely lacking with respect to diagnostic information compared to the kind of infrastructure that is inside a data center.

A perfect example of the problems one faces when moving applications as they are currently run to the cloud appeared in the New York CTO Club mailing list recently. One of the CTOs reported that he was unable to diagnose a problem with an application using an NFS file system based on cloud infrastructure. The public cloud must also become more instrumented so that such problems can be diagnosed. If such instrumentation were easy, it would have happened already. It is not. The respondents to the post pointed out how they have had to rewrite applications to make them cloud ready.

The transformation involved moving from the paradigm in which the application ran with a local file system to an architecture in which any data needed was fetched in advance from slower and more variably performant remote storage and then storing what was needed in memory. This sort of rewrite is easy with some applications and hard with others. Companies like Gluster have shown it is possible to create a file system on cloud infrastructure that is cheap and reliable, but this will only help a certain class of applications. Even with this capability many applications will have to be rewritten.


Another problem is variable performance. Anyone who has used the public cloud seriously has encountered vast differences in performance for no apparent reason. One CTO I spoke with recently said that identical Hadoop clusters running in the cloud can process identical workloads in 20 minutes or in as long as five hours.

Another said that in one application, he adopted the practice of acquiring dozens of virtual machines, testing them for performance, and then using only the speedy ones. There is no way around the fact that shared resources will perform less predictably than dedicated ones. Cloud vendors are attempting to address these problems by offering dedicated instances of services and other remediations, all at a higher price. It will take a while to learn to write stable, high-performance applications out of parts that perform in unpredictable ways.

On the other hand, the second observation is that the public cloud cannot be ignored. I also argue in my “Cloud Don'ts” article that the public cloud will win in the end. The economies of scale of public cloud infrastructure will crush the private cloud in the long term. But the long term may be 10 years out. It doesn't make sense to ignore the cost advantage of the public cloud. Whatever you build should be able to be migrated to the cloud when you are ready.

The third observation is that CIOs and CTOs are rushing to the cloud without understanding the economics of the alternatives. The cloud offers great speed when it comes to getting things running. But few CIOs and CTOs can produce a detailed cost analysis of running an application in the cloud for several years compared with the internal costs of running the same application. The cloud part is easy to calculate. But the cost of running the application internally is much harder to compute. Often, the first tier of applications to go to the cloud are those that are not that critical. If these applications are low priority, couldn't they be run on machines that run at low capacity or are used for disaster recovery? When the demand on those machines rose or when a disaster recovery scenario occurred, these applications could be taken offline. Few CIOs or CTOs are able to do a defensible comparison of the costs.
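A back-of-the-envelope comparison might look like the sketch below. All figures (hourly rate, server cost, share of capacity, useful life) are invented for illustration and would need to be replaced with a company's own numbers before any conclusion could be drawn.

    # Rough comparison of running one application in the public cloud versus on
    # a share of an existing internal server; all figures are hypothetical.
    def cloud_cost(hourly_rate, hours_per_month, months):
        return hourly_rate * hours_per_month * months

    def internal_cost(server_capex, useful_life_months, share_of_server,
                      monthly_ops, months):
        # Amortize a share of an already-owned server plus its share of power,
        # cooling, and admin time.
        return (server_capex / useful_life_months * share_of_server +
                monthly_ops * share_of_server) * months

    print(cloud_cost(0.12, 730, 36))               # ~ $3,154 over three years
    print(internal_cost(6000, 48, 0.25, 150, 36))  # ~ $2,475 over three years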

I saw a presentation by Sean Hull, an expert in database and cloud computing performance optimization who publishes the iHeavy newsletter. Hull argued that most companies go to the cloud for economic reasons but then, once their applications are running there, they face a variety of performance and optimization challenges they are not ready to face. Hull also pointed out that the assumption that the cloud is always a cheaper way to run high-performance applications is not always correct. For example, a large class of disk-intensive applications will be cheaper to run at consistent levels of performance with traditional dedicated hardware, because of the cost of the additional cloud resources needed to achieve consistent performance.

The fourth observation is that value-driven business process analysis must become a tool for CIOs and CTOs to focus their energies. I’ve been working this year on a book with two Accenture consultants, Mathias Kirchmer and Peter Franz, who have developed a practice of applying the methods and tools of BPM with a focus on value creation to guide analysis and implementation. CIOs and CTOs can learn from this practice. Before thinking about where to apply the flexibility of the cloud, CIOs and CTOs should first understand which processes are most important to creating value. By focusing on these processes, the results of any innovations will have the biggest payoff.

So, in light of these observations, what is a reasonable path forward? How can CIOs and CTOs lead the way in translating the technical capability of the cloud into business success?

For CIOs and CTOs at most companies, I think the following steps make sense:

First, CIOs and CTOs must prepare to defend their analysis of the cloud by improving management and accounting practices. In other words, it is time for the IT function to do what it has recommended the rest of the company do: create accurate models of its activities and use them to better understand costs and track processes so efficiency and productivity can be continuously improved. (See “Applying Business Intelligence to the Data Center” for a suggestion on improving modeling of IT infrastructure.) A crop of new products has been developed to help this happen. nlyte Software and Power Assure both have products that help create more accurate models of the data center for energy management and other optimization programs. Precise Software offers a performance management database that can track everything related to application performance in one coherent repository. Apptio is addressing the weakness of IT management by creating a comprehensive suite for tracking IT spending and value from end to end. Sunny Gupta, Apptio’s CEO, asserts that a new discipline of Technology Business Management must be created. The point is that, no matter what tools are used, questions about where IT spends money and how IT infrastructure creates value can no longer be a mystery.

Second, IT must better understand which processes are most important to optimize. This is trickier than it sounds. Every VP in a company is normally clamoring for attention and thinks their problems are the most important. CIOs and CTOs must lead the way in helping their companies understand what really is core to their business and what is context, to use Geoffrey Moore’s formulation set forth in “Living on the Fault Line”. But how to do this? In my recent studies of BPM, it seems that process analysis shows the way. What processes are most important at your company? Which are differentiating? In which process areas do you lead the industry? In which are you behind? The high-priority areas should be those processes crucial to value creation in which you are behind or falling behind.

Third, use private cloud infrastructure for experiments in how to make applications better and more responsive. This is the part of the process that relies on innovation and invention. It is likely that cloud-based custom applications built from the abundance of componentry now available will be key. The point is to start with the business value needed and work back toward technology. Find out what questions need to be answered, what data is needed to answer them, what analytics can help explore that data, and how increased visibility would allow more optimization and automation. The business side doesn’t know what the cloud makes possible. CIOs and CTOs who do know must marry that knowledge with a more intimate understanding of the business. The reason to use the private cloud as the foundation is that it is more instrumented and stable. Cisco’s Unified Computing System, for example, provides virtualization of the entire data center, including the hardware and networking layers. This provides a new frontier for describing entire data centers using metadata. With such capability you can move and reconfigure not just virtual machines but hardware and networking configurations. Unlike the public cloud, everything is fully instrumented. Unlike the public cloud, performance will not vary mysteriously. There are many other private clouds coming to market that offer similar benefits.

It is vital that the solutions built on the private cloud do not become a point of lock-in. Standards efforts such as the Apache Foundation’s Deltacloud should help. CIOs and CTOs should keep an eye on the public cloud’s evolution toward maturity and keep close track of the costs of the private cloud so that rational decisions can be made about where to provision specific types of applications and functionality.

In the end, the value of the cloud can only be realized if a sophisticated understanding of the costs and capabilities of the cloud is married to a sophisticated understanding of the ways processes can and should create value. With this understanding in mind, invention and innovation are possible. My strong belief is that this mixture of information can only be brewed in the minds of CIOs and CTOs after methodical preparation. If done right, the payoff is not just a new technology infrastructure, but a new business.

Dan Woods is chief technology officer and editor of CITO Research, a firm focused on the needs of CTOs and CIOs. He consults for many of the companies he writes about. For more stories about how CIOs and CTOs can grow visit CITOResearch.com.

Source: Internet

Six Leadership Qualities of a World-Class CTO
By Jill Layfield

A superstar CTO will contribute to the success of any company, but is particularly important for an enterprise whose customer experience is enabled by technology. At Backcountry.com, an online retailer, we recently hired C.J. Singh as CTO, so the qualities that make a great CTO and the facets of a successful CEO-CTO relationship are top-of-mind. Technology expertise would seem to be a CTO’s primary contribution, but it is only one among many intertwining responsibilities. As one of the most critical members of any executive team, the CTO should be expected to contribute strategy and counsel to the CEO. As head of the engineering department (called IT in many companies), the CTO must understand and support company strategy as it aligns with the mission and vision. He or she has to communicate that strategy clearly and effectively, lead the team in implementing it, and hold the team accountable to overarching company metrics. On top of it all, the CTO needs to run a lean and efficient operation, keeping costs low, output high and the CEO happy.

A successful CTO as a leader must do the following:

1. Align to the Strategy: A good leader can work strategically in the best interests of the company as a whole. The best CTOs are competent strategists who don’t just focus on engineering initiatives but make decisions that align with the company’s overall vision. They will be able to communicate these decisions and the reasoning behind them to their team, while rallying support for the company’s goals. Sometimes objectives will overlap and sometimes they won’t. The CTO needs to be able to differentiate and execute only on what’s pertinent.

2. Contribute to the Innovation Discussion: Each department has different responsibilities and varying strengths, but each makes an integral contribution as part of a cohesive unit. Critical conversation among the leaders of each department is key to integration success. Obviously, the CTO must be able to clearly communicate from an expert engineering perspective, but he or she also must be able to hear and understand other points of view, concerns and needs with regard to the entire customer experience. Responsible communication will likely lead to innovation and should increase the reach and usability of the company’s technology.

3. Be Agile and Deliver Results: The CTO must lead the engineering department in a quick and creative response to necessary change, while keeping the company’s business goals clearly in sight at all times. The difference between a great engineering department and an ineffective one is agility and the ability to accomplish what is expected in a timely manner. A great team under a great leader will get the job done every time. The CTO must deliver results despite unforeseen obstacles and clearly communicate project status to both engineering employees and the executive team. Any unexpected and unnecessary lag time will have a negative impact on the whole company.

4. Be a Leader in Software Development and Scalability: For Backcountry.com, technology lends a competitive edge. Our CTO leads the charge in staying ahead of the curve, buying, integrating and building the software that differentiates our business from the competition and enhances the customer experience. The CTO also must ensure that all systems and software can scale to the demands of growth. The ability to focus and never lose sight of the basics is imperative.

5. Be Accountable via Unified Metrics: To run an efficient organization, the CEO and CTO must agree on metrics. Engineering team goals and the coinciding Key Performance Indicators should align with overall company aspirations (e.g., high conversion rate), system efficiency goals (e.g., fast page load) and availability standards (e.g., uptime). A mishmash of metrics indicates a miscommunication somewhere along the line. Lack of agreement could lead to misguided development, a rogue department and an inefficient business.

6. Run a Tight Ship: Despite high demands and the pressure to deliver the best of everything on time, CTOs must run an increasingly efficient engineering organization. Shrewd management of the business-within-the-business will ultimately result in better cash flow and higher earnings before interest, taxes, depreciation and amortization (EBITDA) margins for the company as a whole. There’s no doubt that complementary strengths and personalities make a better CEO-CTO relationship, but when it comes to the nuts and bolts of what works, basic expectations must be met. The CEO relies on the CTO to administer the technical intricacies of a business and to hone those intricacies to the objectives at hand. A superstar CTO will do that, but will also bring experience, knowledge, balance, accountability, management skills and business acumen to the executive roundtable.

There’s much more to a CTO than technology. A CTO is first and foremost a chief officer—a respected advisor and team-mate of the chief executive. The responsibility of the job is immense, and the importance of the relationship is undeniable.

About the author: Jill Layfield is the CEO of Backcountry.com, an online retailer of premium outdoor gear.



21st Century ICT Graduates: The Architects of Future Bangladesh
By Kanon Kumar Roy, Director General, National Board of Revenue

The famous South African social rights activist and retired Anglican bishop Desmond Tutu said, “we should all aim to create a society where people matter more than things”. The same spirit has been adopted in the policy statement behind “Digital Bangladesh”, where the well-being of the citizens of Bangladesh is the prime focus. In the current information age, there is no way to achieve leapfrog development in the economic and social sectors other than putting ICTs in the driving seat. Research studies indicate that information and communication technologies are infrastructures whose provision most profoundly accelerates the economic development of a nation.

Accordingly, Kessides [1993] posited that infrastructure contributes to economic development both by increasing productivity and by providing amenities that enhance the quality of life. “Digital Bangladesh” is therefore not merely a concept but an imminent reality, provided it is genuinely understood and believed in by the policy makers and implementers of the Government. However, compared with any country truly powered by ICTs, Bangladesh still has a long way to go. So the relevant question at this stage is: what do we need to do? Among many other requirements, we need a sufficient, efficient and up-to-date IT workforce. For example, we can look at Singapore, a front-runner among countries powered by ICTs.

According to statistics from 2009, the population of Singapore was around 4.5 million and its IT workforce was about 130,000 (2.89%). Even so, the Government of Singapore felt a scarcity of IT manpower and was inviting more workers into the field. So the question is: how many do we have, and how many more do we need? There is a dearth of such statistics regarding the number or percentage of IT manpower in Bangladesh, but it can safely be presumed that the figure is insignificant relative to the requirement.

Recently, a vibrant roundtable organized jointly by the Dhaka Chamber of Commerce & Industry and CTO Forum Bangladesh clearly indicated the need for our universities to produce well-equipped and competent ICT graduates. The roundtable, which was in fact a very successful industry-academia dialogue and probably the first of its kind in Bangladesh, also signalled that more such dialogues would follow. Its most important outcome was the message that industry feels the ICT graduates produced by our universities lag well behind the needs and expectations of employers, and that there are gaps between the existing curricula of the ICT and Computer Science disciplines and curricula that could meet the requirements of the 21st century. The curricula no doubt play a vital role in producing ICT graduates of international standard, yet there is no available evaluation or analysis of the existing ICT curricula of our universities that can clearly tell us about the extent of that gap. However, we need to identify the nature of those gaps and the changes required in the curricula in order to standardize them and make them capable of producing competent ICT graduates. We may need to change the way in which computer science and ICT students are educated to meet the needs of the ICT industry in the 21st century.

According to the curriculum development guidelines of Career Space (a consortium of eleven major information and communications technology companies of Europe), ICT graduates need a solid foundation in technical skills from both the engineering and informatics cultures, with a particular emphasis on a broad systems perspective. They need training in team working, with real experience of team projects in which several activities are undertaken in parallel. They also need a basic understanding of economics, market and business issues. Career Space also recommends that ICT graduates of the modern age need good personal skills such as problem-solving abilities, awareness of the need for lifelong learning, readiness to understand fully the needs of the customer, and awareness of cultural differences when acting in a global environment. According to the Career Space recommendations, ICT curricula should consist of the following core elements: a scientific base of ~30%; a technology base of ~30%; an application base and systems thinking of ~25%; and a personal and business skills element of up to ~15%. While thinking about revising the ICT curricula of our universities, we can carefully review those recommendations to formulate curricula guidelines of our own.

Discussing what industries expect of the ICT graduates prepared by universities involves many opinions and debates. There are opinions about fixing a considerable length of practical work exposure for students in industry as part of the curriculum, involving industry leaders in the boards responsible for formulating curricula, and accommodating industries’ needs in the form of special practice-oriented courses. Dianne Hagan reports on employer satisfaction with ICT graduates, drawing on an investigation into learning outcomes and curriculum development in the major Information and Communications Technology disciplines in Australian universities. The study had three major aspects: 1) finding out how universities were responding to changing demands, and discovering examples of innovation and good practice in teaching in this area; 2) interviewing graduates to see how they felt about how and what they had been taught; and 3) surveying potential employers about their needs and their satisfaction with employees who had recently graduated from ICT university courses. The study resulted in a number of suggestions for improvement. 30% of the respondents opined that universities should provide students with more work experience; in their opinion, “work experience is the best training.” 17% of respondents felt the need for industry consultation and thought there should be more channels of communication between universities and employers, so that industry can play a greater role in course and curriculum design. They favored more use of industry lecturers and suggested that university lecturers should stay in touch with what is happening in industry in terms of current methodologies. 14% indicated the importance of industry awareness and suggested that students should be made aware of what to expect in industry. 14% of the respondents talked about the generic skills of students and suggested that they should be given more thorough training in written and oral communication skills, teamwork and problem solving.

In January 2012, a Government-Industry ICT Action Plan aimed at building the supply of high-level ICT graduates was published by the Department of Education of Ireland and ICT Skills, a training agency. The Plan outlines a range of short, medium and long term measures to develop a sustainable domestic supply of high-quality ICT graduates to support the further expansion and development of the ICT sector and to support innovation and growth across other sectors of the Irish economy. One of the key measures in the Plan was the roll-out of ICT skills conversion programs by higher education providers in partnership with industry. A very positive initial evaluation and strong industry endorsement resulted in a second phase of the conversion programs, which is now being rolled out. In light of the above information and discussion, we can focus on some important aspects of the ICT curricula to be developed for our universities. These are:


• There should be at least 6 months of practical work experience in ICT industries, and this should be mandatorily adopted as part of the ICT curriculum in all universities offering degrees in ICT or Computer Science.

• Guest lectures from ICT industry leaders can create wonderful opportunities for graduating students to match their knowledge against the requirements of practical workplaces.

• The boards responsible for developing ICT curricula could include representatives from the ICT industry, so that students’ regular classroom learning blends theory with real-world practice.

• Frequent dialogues, seminars and workshops between industry and academia could play a vital role in developing pragmatic ICT curricula, bringing in required changes and accommodating new ideas, so as to equip our upcoming ICT workforce with proper knowledge and experience in an ever-changing technology environment.

• Regular, periodic evaluation of industry feedback and review of ICT curricula against quickly changing requirements will be important for continuously improving them.

Keeping pace with the requirements of industry, universities also need to focus on some current and important issues. For example, the development of application-oriented solutions is now very popular, and industry is paying more attention to this. Implementation, management and support of ICT systems by the same vendor is gaining importance day by day. Especially in the public sector, where the ‘capacity building’ issue is debatable and the ‘managed service’ model is taking its place, this approach might flourish in Bangladesh as it has in other countries. ICT sales and consultancy have also become more important as the industry grows. Globally, the majority of graduates increasingly need a combined qualification drawing on both the engineering and informatics cultures as well as on related disciplines such as business and behavioral skills.

However, keeping in mind all the issues discussed above, both industry and academia can play an important role by becoming more involved and working jointly in their respective areas. The good news is that awareness and enthusiasm have been created and dialogues have started. We can be optimistic that we can do something and have a better generation of ICT graduates ready to face the challenges of the 21st century.

Sources:

1. Career Space, Curriculum Development Guidelines, New ICT Curricula for the 21st Century: Designing Tomorrow’s Education, Luxembourg: Office for Official Publications of the European Communities, 2001, ISBN 92-896-0074-8, Cat. No: TI-39-01-966-EN-C.

2. Dianne Hagan, Employer Satisfaction with ICT Graduates, School of Computer Science and Software Engineering, Monash University, Caulfield East, Victoria, Australia.

3. ICTSkills.ie, ICT Skills Conversion Courses.

Author Details:

Kanon Kumar Roy
Director General, NBR
Fellow Member, CTO Forum Bangladesh
Email: [email protected]


CTO FORUM EVENTS


Roundtable on “Online Transaction Security” held at Dhaka Press Club

Kamal Uddin Ahmed, Addl. Secretary, Ministry of ICT

Shyama Prasad Bepari, Joint Secretary, Ministry of ICT

Elena Nikolicheva, Third Secretary, Head of the Economic Affairs Section, Embassy of the Russian Federation in Bangladesh

Prabeer Sarkar, Chief Executive Officer, Office Extracts

[Left to Right] Syeeful Islam, Ex. President, DCCI; Shafquat Haider, Director, FBCCI; Nazneen Sultana, Deputy Governor, Bangladesh Bank; Tapan Kanti Sarkar, President, CTO Forum; Kabir Bin Anwar, Director General (Admin), PM’s Office & Project Director, A2I; and Dasgupta Asim Kumar, Executive Director, Bangladesh Bank were present at the roundtable.


Seminar on “Role of IT in 21st Century Banking” held at BIBM

Dr. Atiur Rahman, Governor, Bangladesh Bank

Dr. Bandana Saha, Supernumerary Professor, BIBM

Mohammed Nurul Amin, Chairman, ABB and MD & CEO, NCC Bank Ltd

Koushik Nath, VP, Systems Engineering, India & SAARC, Cisco Systems

Nazneen Sultana, Deputy Governor, Bangladesh Bank

Md. Shihab Uddin Khan, Associate Professor, BIBM

Tapan Kanti Sarkar, President, CTO Forum Bangladesh

Dr. Toufic Ahmad Choudhury, Director General, BIBM


3rd Bangladesh CIO Summit held at Hotel Ruposhi Bangla

Md. Nazrul Islam Khan, Honorable Secretary, Ministry of ICT

Md. Sabur Khan, President, DCCI

Dasgupta Asim Kumar, Executive Director, Bangladesh Bank

Abul Kashem Md. Shirin, DMD, Dutch Bangla Bank Ltd.

Syed Masodul Bari, Secretary General, CTO Forum Bangladesh

Tapan Kanti Sarkar, President, CTO Forum Bangladesh, addressing the 3rd Bangladesh CIO Summit.


Roundtable on “National Payment Switch & New Opportunities for e-Commerce” held at BASIS Auditorium

Debdulal Roy, Joint Secretary General, CTO Forum, addressing the roundtable. Anir Chowdhury, Policy Advisor, A2I Programme; A.K.M. Fahim Mashroor, President, BASIS; Md. Nazrul Islam Khan, Secretary, Ministry of ICT; Tapan Kanti Sarkar, President, CTO Forum; and Humayun Kabir, GM, Bangladesh Bank were present [Left to Right].

INFOCOMM-2012 held at Hotel ITC Sonar, Kolkata


CTO MAGAZINE, VOL. 01, ISSUE. 03, APRIL-JUNE 2013