TRANSCRIPT
7.3.1
One area distinctive to its vision, priority and thrust
Igniting the Spirit of Innovation and Entrepreneurship

1. Vision, Priority and Thrust of the Institution
Our college is driven by the vision of producing professionally competent and socially sensitive engineers. The institute encourages students and faculty to think creatively, to innovate, and to register patents. They are further encouraged to conceive, develop and realize new products.
2. Three Special Projects Undertaken by Students of the Institution
Some information on these three projects is given hereunder:
a. Artificial Heart Project, in association with Dr. P.S. Reddy, Professor of Cardiology, University of Pittsburgh, USA, and three engineering colleges, namely Chaitanya Bharathi Institute of Technology, Hyderabad, BITS Pilani, Hyderabad campus, and Sreenidhi Institute of Science and Technology.
b. Sreenidhi Satellite Project, solely by our college
c. Autonomous Car Project, solely by our college
The progress reports of the above three projects are uploaded to NAAC portal as additional information.
3. Initiatives taken by SNIST to ignite spirit of Innovation
Since 2006, the college has been conducting innovative-ideas-and-solutions competitions, which have attracted the attention of various engineering colleges in the country; they depute their students to participate in this event year after year. The college has thus been making pioneering efforts to ignite the spirit of innovation in engineering students.
a. Establishment of Sreenidhi Hub and facilities provided
Sreenidhi Hub was established as the innovation and incubation centre of Sreenidhi Institute of Science and Technology to nurture the creative spirit, enhance the risk-taking capability and, in the process, develop the leadership qualities of our students.
The area dedicated to Sreenidhi Hub is around 5,000 sq. ft, comprising 10 cubicles as office space for business incubatees, a tinkering/sandbox lab space, a technology room, a meeting room for the Director and Manager, and training rooms for students.
An advisory committee to Sreenidhi Hub has been formed, comprising eminent engineers from industry, scientists from research labs, a senior bank manager, and the Dean and Director of the host institute. The advisory committee is approved by the Board of Governors of the institute.
With the initiatives mentioned above, Sreenidhi Hub is paving the way to create successful entrepreneurs by providing adequate early-stage business support, constant access to high-quality mentoring, vast networks, and inputs on both strategy and execution of business ideas to our students.
b. SNIST - MHRD Institute Innovation Council
SNIST has an active Institute Innovation Council under the MHRD Innovation Cell, wherein students and faculty participate in and organize events such as the Leadership Talk Series, ideation sessions, prototype competitions and the Smart India Hackathon. Under this MHRD initiative, the institute was awarded a four-star rating (out of a maximum of five stars) for its achievements and active participation in the academic year 2018-19.
c. SNIST as a J-Lab under J-HUB, JNTUH
To promote the spirit of innovation among students using emerging technologies, the institute has been organizing technical hackathons at the regional and national levels, and innovation competitions every semester on campus, where students identify problem statements or work on problem statements presented by MSMEs and reputed IT organizations.
d. Strategic Alliances Forged for promoting Innovation
To promote innovation activities, the institute has forged strategic alliances with the following:
(i) The Indus Entrepreneurs (TiE), (ii) Telangana-Hub, (iii) Institute Innovation Council under MHRD, (iv) Jawaharlal Nehru Technological University Hub (J-Hub).
e. Achievements and Accolades
i. Hackathons Awards
SNIST students topped the list with a total of six awards - three Student Innovator and three Idea Prototype awards. The ideas were evaluated by a jury of industry experts from the Hyderabad Software Enterprises Association (HYSEA), a supporting partner of J-HUB.
ii. J-HUB Hackathon
Topped in the number of awards received – 6 awards in total: Student Innovator awards – 3; Idea Prototype awards – 3.
iii. International Innovation Awards
International Innovation Fair 2017, Vizag – five Silver Medal awards under different categories
International Invention and Innovation (INTARG), Poland, June 2019 – two Gold, two Silver, one Bronze
Bangkok International Intellectual Property, Invention, Innovation & Technology Exposition, held on the occasion of "Thailand Inventors Day" and organized by the National Research Council of Thailand – two Silver and two Gold medals
Indonesian Invention and Innovation Promotion, February 2019 – Special Innovation Award
International Innovation Fair, Hyderabad, India, 2019 – one Gold, three Silver, two Bronze
iv. National Innovation Awards
The Indus Entrepreneurs (TiE), Hyderabad – award of Rs. 1.0 lakh for Best Innovation Idea: Optimus, a smart pill companion
Rural Innovators Start-Up Conclave 2019, organized by the National Institute of Rural Development & Panchayati Raj, Ministry of Rural Development, Government of India:
i. Team Aquart of SNIST won first prize and Rs. 10,000/- for their product, a smart tap indicator
ii. Team Pixhawk of SNIST won the Best Innovator Award for the idea of precision agriculture using drones
MSME Student Grant 2019, Mumbai, 28-29 October 2019, CMO, Mumbai – Special Innovation Achievement Award
AICTE Chhatra Vishwakarma Awards 2019 – five teams were nominated, of which two were selected in the regional round for the Phase-II presentation, among 250 student teams.
AICTE Proof of Concept Competition 2019 – two prototypes from SNIST were selected at the national level for the boot camp and prototype exhibition, among thousands of entries received from across India.
4. Efforts made for promotion of Entrepreneurship in students
i. Telangana Academy of Skills and Knowledge
(TASK) in alliance with Indian School of Business (ISB)
They have devised the Technology Entrepreneurship Program (TEP) to offer to engineering college students. Our college is recognized as the best participating institution, as a large number of our students have taken part in this program and done extremely well. As our college is autonomous, it has integrated the subjects offered by ISB for TEP as electives. Our student team "Farmco" was appreciated by ISB as one of the two teams in the TEP to complete all phases of the project.
ii. Wadhwani Foundation
The Foundation has been making enormous efforts to promote entrepreneurship; our college is one of the two institutions chosen from the 470 institutions under JNTUH. The curriculum proposed by the Foundation has been adopted by our college and is offered as one of the open elective series. Our faculty were trained by the Foundation and have taken responsibility for teaching the concerned subjects.
iii. Strategic Alliances Forged for Promoting Entrepreneurship
To promote entrepreneurship activities, the institute has tied up with:
(i) Wadhwani Foundation, USA, (ii) ISB, (iii) The Indus Entrepreneurs (TiE), (iv) CITD.
iv. Awards and Accolades
a. Development of a Drone
The drone prototype was appreciated by Prof. Jaya Shanker of the Agricultural University, Hyderabad, when it was demonstrated detecting pests in a crop and automatically spraying the selected pesticide. Its present value is about Rs. 5.0 lakhs, and some agriculturists with large land holdings are prepared to buy it.
Alternatively, agriculturists with small holdings can use the drone as a service provided by its owner, paying a certain amount per acre. The drone has been demonstrated at various national competitions, and the prize money won runs into lakhs of rupees. The college will shortly register a company as per company regulations and start producing the drone for sale, which we hope will be very popular in our state, where 60% of youth work in agriculture-related activities.
b. Sieger Technologies, a startup company situated in the Cherlapally Industrial Area, manufactures lithium-ion batteries for electric vehicles. Its present turnover is Rs. 5.0 crores. The following alumni, mostly from the 2016 and 2017 batches, are associated with it: 1. Guru Prashanth Reddy, 2. Gorre Varun Reddy, 3. Sreekanth Reddy P, 4. Galla Bharath Kumar, 5. Revanth Metla, 6. Anjan Prasad D, 7. Sahaj Kuldeep Mallavarapu, 8. Naveen Kumar Palla, 9. Shiva Kiran K, 10. Pavan Kumar Merugu.
c. TCT Holidays and Technologies Private Limited is a startup company belonging to Mr. Nagaraju Kandukuri, with 16 employees, situated at Mahesh Nagar, Kapra, Hyderabad. The company develops travel and holiday software for its customers; its website is Tralamo.com. Mr. Nagaraju Kandukuri is one of our distinguished alumni, who graduated in 2013.
d. Mr. Nagaraju Kandukuri also started another software development startup, Tiera Techno Private Technologies, with 42 employees, situated at Madhapur, Hyderabad, with headquarters in Malaysia.
5. Incentives and Support Given by SNIST to Faculty and Students for Submission of Applications for IPR (Patents)
The institution has promoted patenting activity by giving necessary support, such as:
i. Scrutiny of the patent application by a patent attorney
ii. Submission of the application to the Indian Patent Office along with the necessary application fee, covering design patents and provisional registration
iii. Submission of the complete specification of the patent along with the necessary publication and examination fees
iv. After publication, the Patent Office invites objections from any party
v. Examination of objections received, if any, and a personal interview if required, for which the fee is already paid at stage iii
vi. Granting of the patent and permanent registration
All the expenses involved are paid by the college, and the college is very happy that 37 applications have been submitted to the Patent Office. The following information shows how many applications have been granted and how many are pending at various stages.
Sl. No – Patent information – No.
1 – No. of patent applications submitted by students – 18
2 – No. of patents registered in the names of students – 3
3 – No. of patent applications submitted by faculty – 19
4 – No. of patents registered in the names of faculty – 3
Total applications submitted by the college as a whole: 37
Total patents already registered: 6
Amount spent by the college: Rs. 2.5 lakhs
6. Concluding Remarks
Our management is progressive in nature and interested not only in giving effective instruction but also in encouraging students in innovation and entrepreneurship. These efforts have borne fruit thanks to the enthusiasm shown by the students in taking part in these activities, and our faculty is always ready to guide them. The college as a whole is very happy with the results achieved.
INDO-AMERICAN ARTIFICIAL HEART PROGRAM (IAAHP)
Building indigenous Artificial Heart
Design & Development of a Low-Cost 3rd-Generation Extracorporeal Left Ventricular Assist Device (LVAD)
Website of IAAHP organization
http://iaahp.in
Need for developing an Artificial Heart
Cardiovascular diseases are the leading cause of mortality globally. This has
become a trend in India as well. In 2016, 28.1% of deaths in India were due to
cardiovascular diseases. This is larger than all communicable, maternal,
neonatal & nutritional diseases combined. This continues to be a growing trend
in India. Most cases of advanced heart failures result in death as the healthcare
solutions are unavailable. There are over 4 million heart failure cases in India at
any given point of time but barely 100 heart transplants are conducted every
year.
The main problem is poor cadaver donation for matching hearts for transplantation. An inexpensive LVAD that extends a patient's life will make it easier to match hearts to patients.
Summary
The dream of developing an indigenous artificial heart in India is framed and led
by the renowned cardiologist Dr. P.S. Reddy, Founder of MediCiti Hospitals. He
initiated and formed a consortium to work towards this goal. The consortium
includes Dr A.G.K. Gokhale, lead cardiovascular surgeon and advisor to the
team. It consists of Co-Investigators from BITS Pilani, Hyderabad campus,
Sree Nidhi Institute of Science and Technology, Chaitanya Bharathi Institute
of Technology & Laxven Pvt Ltd initially, and many more joined later. A team of
engineers and clinicians from the University of Pittsburgh, pioneers in the field
of artificial hearts, is mentoring this project.
The artificial heart program, with its focus on value-engineering, endeavours to
tread the fine line between applied research & commercial viability.
Unique features of the proposal
- Indigenous development in a consortium approach with academic
institutions, industry, a medical institute and a hospital
- An international advisory committee to keep the project on the right track with
its experience
- Industry readiness to take up manufacturing and marketing globally
The task force contains teams from institutes as well as private companies and
will have a strong focus on product development. The program is structured as a
phase-gate process with biochemical impact study of blood using LVAD in a
hemodynamic circuit as the gate for phase 1.
The end goal is a commercially viable device.
The aim of this project is to build a commercially viable, indigenous,
extracorporeal Left Ventricle Assisted Device (LVAD). It is a mechanical
device that pumps blood from the left ventricle to the aorta and reduces the
burden on the heart muscles. This is critical in advanced heart failure and is
used as a “bridge to transplantation”. It extends life of patients with weak
hearts who are waiting for the right donor. It can also be used as “destination
therapy”, where the device is implanted in patients whose heart muscles need
support in pumping blood.
The procedure to implant an LVAD that is imported from USA or Germany
costs between Rs. 90 Lacs & Rs. 1.25 Crs. The aim of the consortium is to build
an affordable LVAD at one-tenth this price, which would be of great help to
Indian patients in particular and to patients globally in general. It will provide an
additional avenue of treatment for advanced cardiovascular diseases that is
currently unaffordable to most patients in India.
Short term goals / steps of the Consortium:
The first phase consists of design and development of device for
hemodynamic system.
- Development of a prototype that functionally resembles the heart's pumping parameters
- Assessment of the function of the prototype in terms of blood flow rates
- Assessment of effects on blood cell integrity after passing through the prototype.
The second phase is animal trials and
The third phase will be clinical trials in humans.
The consortium has initiated the first phase of work, i.e., development of a quality motor and of the prototype.
In view of the criticality and complexity of the device to be developed, three
variants of pumps, viz., Axial pump with hydrodynamic bearing, Centrifugal
pump with hydrodynamic bearing and Centrifugal pump with magnetic
levitation are being proposed to be developed. After testing in hemodynamic
circuit with simulated blood & actual blood, the successful pump will be chosen
for further studies and animal trials.
Details of proposal for building an LVAD
Scope
The scope of the proposed research project is design & development of
extracorporeal LVAD. The LVAD is of third generation and the team will
explore three variants:
i) Axial pump with non-contact hydrodynamic bearing
ii) Centrifugal pump with non-contact hydrodynamic bearing
iii) Centrifugal pump with magnetic levitation
The task force intends to develop three variants of Continuous Flow LVADs as
part of phase 1 of the program. The models will be tested using Computer
Aided Flow Simulations and experimental simulations in a hemodynamic
system. All feasible designs will be fitted into large mammals to identify the
best performing device. This will mark the end of phase 1 of the program. The
best performing device will be taken further through large mammalian animal
trials.
Objectives
i) To design the configuration conceptually and finalize the specification
ii) To design an impeller that produces no damaging shear in the flow, meeting the flow and pressure
specifications
iii) To develop hydrodynamic bearing technology
iv) To develop magnetic levitation technology
v) To finalize the design for three variants: a) Axial pump with
hydrodynamic bearings, b) Centrifugal pump with hydrodynamic
bearing & c) Centrifugal pump with magnetic levitation
vi) To carry out Computational Fluid Dynamics (CFD) analysis to analyse
the flow, and vulnerability studies for thrombosis
vii) To establish flow visualization set-up
viii) To design and establish hemodynamic circuit to test with a) simulated
blood & b) actual blood
ix) To design controller for pump flow control & levitation
x) To realize 3 types of devices with control systems
xi) To subject the successful device to animal testing
The following table details the sub-system development and the co-investigators for the program:
S.No – Subsystem – Co-Investigator (Institute):
1. LVAD with axial pump & hydrodynamic bearings – Dr. Purushotham, Dr. Anil Kumar, Dr. Saikrishna (Sree Nidhi Institute of Science & Technology)
2. LVAD with centrifugal pump & brushless motor – Mr. Ramesh Reddy (Laxven Pvt Ltd)
4. LVAD with centrifugal pump & hydrodynamic bearings – Prof. Ravinder Reddy (Chaitanya Bharathi Institute of Technology)
5. Hemodynamic circuit for experimental testing of LVADs using fluid with blood-like properties – Prof. Srinivasa Regalla (BITS, Pilani – Hyderabad Campus)
6. Biochemical & physical impact of LVAD on blood – Prof. Suman Kapur (BITS, Pilani – Hyderabad Campus)
7. Infection study of site – Prof. Siva Sai (Sree Nidhi Institute of Science & Technology)
8. Animal testing – Veterinary College, Agricultural University, Hyderabad
Specifications
The broad specifications of the extracorporeal LVAD for pump design are given
below:
SNo – Parameter – Specification
1 – Flow rate – 3-8 L/min (optimum at 5 L/min)
2 – Pressure – Max 140 mm Hg
3 – Piping – 3/8”
4 – Diameter – Max 85 mm
5 – Length – Max 95 mm
6 – Weight – Max 800 g
7 – Life – Min 5 years
8 – Hemocompatibility – Free from thrombosis formation, hemolysis & vWF damage over the stipulated life of 5 years
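The tabulated envelope can be expressed as a simple programmatic check. The following sketch is illustrative only: the field names and the candidate design values are assumptions for demonstration, not figures from the project.

```python
# Hedged sketch: check a candidate pump design against the broad LVAD
# specification envelope tabulated above. Field names are illustrative.

SPEC = {
    "flow_rate_lpm": (3.0, 8.0),     # L/min (optimum at 5 L/min)
    "pressure_mmhg": (None, 140.0),  # max 140 mm Hg
    "diameter_mm": (None, 85.0),     # max 85 mm
    "length_mm": (None, 95.0),       # max 95 mm
    "weight_g": (None, 800.0),       # max 800 g
    "life_years": (5.0, None),       # minimum 5 years
}

def check_design(design: dict) -> list:
    """Return the list of violated parameters (empty list = within spec)."""
    violations = []
    for key, (lo, hi) in SPEC.items():
        value = design[key]
        if lo is not None and value < lo:
            violations.append(key)
        elif hi is not None and value > hi:
            violations.append(key)
    return violations

# Hypothetical candidate design, for illustration only
candidate = {
    "flow_rate_lpm": 5.0,
    "pressure_mmhg": 120.0,
    "diameter_mm": 80.0,
    "length_mm": 90.0,
    "weight_g": 750.0,
    "life_years": 5.0,
}
print(check_design(candidate))  # → []  (all specs met)
```

The hemocompatibility requirement is qualitative and is deliberately left out of this numeric check.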
Pump selection
All three pumps under development will first be tested in the hemodynamic
circuit, with flow visualization, after CFD studies. There should not be any excessive shear
or vortices in the flow. Haemolytic studies will be carried out with the pump
that shows no excessive shear stress or vortices.
Baseline configuration
The initial baseline configurations will be worked out for all three variants above. The
following steps will be followed in design and development:
i) Initial sizing and generation of drawings
ii) CAD design of impeller
iii) CFD analysis of the impeller for understanding the flow pattern and
the lift forces on the impeller
iv) Modifying the impeller to get the required flow characteristics and
smooth flow without shear stress and vortices
v) Materials identification & specifications
vi) Manufacturing the pump components by 3D printing with
hemocompatible polycarbonate materials
Development of pump
Development of centrifugal /axial pump with hydrodynamic bearing:
i) Understanding the technology from literature
ii) Integration of permanent magnets into impeller & study the dynamics
iii) Manufacture or customize magnets (tentatively, neodymium material)
iv) Design of motor stator assembly, windings & integrated casing design
v) Design of suitable hydrodynamic bearing and study its suitability
under varying operating conditions
vi) CFD study for optimization & finalization of pump
vii) Integration of impeller, casing, housing, stator and bearing
Development of centrifugal pump with magnetic levitation:
i) Design of impeller & casing
ii) CFD analysis for optimization & finalization of pump design
iii) Modification of impeller design based on CFD results
iv) Design & configuration of the levitation system based on pulse width
modulation, including finalization of the number of coils for levitation &
drive
v) Deciding control methodology, passive/active for axial and radial gap
control
vi) Fabrication of mechanical components like bobbins, motor
laminations, magnets, tools & fixtures
vii) Assembly of pump & motor with levitation control
Additionally, a controller system to vary the flow, log the data, and give alerts as per
requirement will be designed.
Testing the pump
Building the hemodynamic circuit for testing pump with simulated and actual
blood:
i) Design of circuit
ii) Fabrication of connectors & assembly of circuit with pressure, flow &
temperature sensors
iii) Procure / design & build data acquisition system to monitor pressure,
flow & temperature in real time
iv) Simulate the flow & predict flow stagnation and vortex points
v) Conduct the experiment with a used pump and validate the circuit
vi) Conduct the experiment with the above three types of pumps as and
when ready
vii) Generate flow vs speed characteristics
viii) Conduct experiment to visualize the flow for any shear, vortices,
stagnation points etc.
Testing in hemodynamic circuit with flow of blood:
i) Identification of suitable methods for free plasma Hb estimation
ii) Measuring aggregation of platelets
iii) Measuring of plasma cytokine levels
iv) Measuring immune cell activation markers
v) Estimating vWF unfolding / disintegration
vi) Infection study at point of contact of percutaneous lead with skin
Project cost: The total cost of the project is expected to be ~Rs. 3.5 Crores. The
duration of the project is 3 years.
Department of Biotechnology, SNIST: Work Presented in the Artificial Heart Meet
Assessment of RBC & WBC fragility, Inflammation and Infections
Project objectives / Conceptual flow CHART
Evaluation of the fragility/shelf life of RBCs & WBCs upon mechanical shear/stress
Assessment of Factors contributing to Inflammation post Mechanical shear
Identification of predominant skin infections at the insertion sites
Identification of Intervention strategies to prevent infections
PO – 1: RBC & WBC Fragility
The interaction between RBC function and endothelial cells (ECs) creates an adaptable life-support system.
• To standardize different techniques associated with assessment of RBC &
WBC fragility
• To delineate the mechanism associated with fragility of RBC & WBC upon
exposure to mechanical shear / stress.
• Identification of strategies to minimize the effect of mechanical shear on RBC
& WBC
Markers for RBC Fragility
• Loss of cell surface area and morphology alteration
• Shedding of Hb content into plasma
• Level of cell surface Phosphatidyl serine (PS)
• Levels of cell membrane Stomatin
• Fluid Lipid Layer
• Semi flexible filaments (Spectrin)
• Deficiency of anti-oxidative enzymes
AXIAL FLOW BLOOD HEART PUMP (AFBP)
SREENIDHI INSTITUTE OF SCIENCE AND TECHNOLOGY,
HYDERABAD
(16.06.2018)
DR A PURUSHOTHAM
TASKS FOR THE SNIST GROUP
1. DESIGN AND CFD STUDIES OF IMPLANTABLE AXIAL FLOW BLOOD PUMP
2. DESIGN AND IMPLEMENTATION OF MAGNETIC LEVITATION
3. CONTROL SYSTEM DESIGN FOR PULSATILE FLOW
4. MANUFACTURE OF AFBP
5. MANUFACTURE OF CENTRIFUGAL BLOOD PUMP
6. SELECTION AND APPLICATION OF BIOCOMPATIBLE COATING MATERIAL
7. HEMOLYSIS & THROMBOSIS STUDIES ON BLOOD FLOW THROUGH CFBP & AFBP
T.No – Task Description – Time Line – Facilities – Capabilities of faculty:
1. Design & CFD studies of implantable axial flow blood pump – Dec 2018 – CATIA, ANSYS, Pro/E, MATLAB – Dr. A. Purushotham (mechanical design, MATLAB), Dr. Saikrishna (CFD), Mr. B. Sirish (CATIA, SolidWorks), Mr. Hameer Singh (GA, MATLAB, Simulink)
2. Design & implementation of magnetic levitation – Dec 2018 – ANSYS, ECAD – Mr. Venugopal Rao, Dr. Anil
3. Control system design – Oct 2018 – Simulink – Dr. Anil
4. Manufacture of AFBP (prototype) – April 2019 – 3D printer – Mr. Ashok (3D printing)
5. Manufacture of centrifugal blood pump – Oct 2018
6. Selection and application of biocompatible coating materials & coating methods – Oct 2018 – Dr. Saicharan and Ms. Sarada
7. Hemolysis & thrombosis (infection) studies – April 2018 – Dr. Siva Sai
INTRODUCTION: LVAD SYSTEM COMPONENTS
1. Pump & cannula
2. Energy source
3. Control and monitoring console
An implantable device reduces the risk of infection.
SUB-TASKS OF TASK 1: DESIGN & CFD STUDIES OF IMPLANTABLE AXIAL FLOW BLOOD PUMP
1.1 Design of the pump rotor impeller of the AFBP that can satisfy the
hydraulic and clinical requirements
1.2 CFD based studies on the hydraulic performance of AFBP for continuous
blood flow.
1.3 Design optimization of the impeller for the response of pressure rise, flow
rate, efficiency and shear stress over the range of rotating speeds of the AFBP.
1.4 CFD based studies on the hydraulic performance of AFBP for pulsatile
blood flow.
SUB-TASKS OF TASK 2: DESIGN & IMPLEMENTATION OF MAGNETIC LEVITATION
• 2.1 Design of magnetic levitation for air suspension of the rotor of the AFBP.
• 2.2 Magnetic field analysis of the axial permanent magnet unit using ANSYS.
• 2.3 Analysis of the radial active magnetic unit using ANSYS.
• 2.4 Prototype build of the magnetic unit.
PROGRESS OF TASK-1.1 (DETAILS)
Design of the pump rotor impeller of the AFBP
Design Requirements
(i) Hydraulic requirements:
The pump should be able to generate a pressure rise of 60 to 120 mm Hg at a volume flow rate of 6-12 L/min
The pump should have an efficiency above 70%, which helps minimize the size
Speed range: 5000-14000 rpm
Power rating: 3-10 W
(ii) Clinical requirements:
Hemolysis requirement: shear stress always less than 400 Pa; shear rate less than 4 mPa/sec (RBC damage occurs by rupture beyond this shear rate)
Thrombosis requirement: a minimum of recirculation regions, cavitation zones, surfaces with sharp edges, flow stagnation and surface roughness
Studies made by Dr. A. Purushotham
PROGRESS OF TASK-1.1 (DETAILS)
Design of the pump rotor impeller
3. Size requirements:
Hub diameter of impeller: ≤ 18 mm
Blade tip diameter of impeller: ≤ 20 mm
Clearance between inner wall and blade top: 1-3 mm
Number of blades: 3 to 6
Blade thickness: 0.5 mm
Blade shapes: DCA, NACA65, C-series
Studies made by Dr. A. Purushotham
PROGRESS OF TASK-1.1 (DETAILS)
Preliminary design of the pump rotor impeller of the AFBP using analytical expressions: equations (i)-(viii), including the rotor dynamics relation T = Iα, were presented as slide images and are not reproduced in this transcript.
Studies made by Dr. A. Purushotham & B. Sirish
PROGRESS OF TASK-1.1 (PRELIMINARY DESIGN DETAILS OF IMPELLER)
Hub diameter of impeller: 18 mm
Blade tip diameter of impeller: 20 mm
Clearance between inner wall and blade top: 1 mm
Number of blades: 3
Blade thickness: 0.5 mm
Speed: 8000 rpm
Efficiency: 70%; viscosity: 3 x 10^-3 Pa·s
Density of blood: 1060 kg/m^3
Blade angles: inlet angle 12.51°, exit angle 17.06°
Hydraulic power: 1.695 W
Inertial power: 1.052 W
Viscous power: 0.861 W
Total power: 3.608 W
Studies made by Dr. A. Purushotham & B. Sirish
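The power figures listed above should add up to the stated total, and a blade tip speed follows directly from the stated speed and tip diameter. A quick sketch of both checks (the tip speed is a derived figure, not stated on the slide):

```python
import math

# Sketch: sanity-check the preliminary impeller power budget listed above.
hydraulic_w = 1.695   # W
inertial_w  = 1.052   # W
viscous_w   = 0.861   # W
total_w = hydraulic_w + inertial_w + viscous_w
print(round(total_w, 3))  # 3.608 W, matching the stated total power

# Blade tip speed at the stated 8000 rpm and 20 mm tip diameter
# (derived here for illustration, not quoted in the slides):
omega = 2 * math.pi * 8000 / 60   # angular speed in rad/s
tip_speed = omega * (20e-3 / 2)   # tip radius = 10 mm
print(round(tip_speed, 2))        # ≈ 8.38 m/s
```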
PROGRESS OF TASK-1.2 (DETAILS)
CFD-based studies on the hydraulic performance for continuous blood flow
Studies made by Dr. K. Sai Krishna
Hub diameter – 14.8 mm
Blade height – 2.6 mm
Inlet vane angle – 12°
Exit vane angle – 75°
Enclosing pipe diameter – 22 mm
Impeller length – 72 mm
OPERATING CONDITIONS:
• Blood is treated as a non-Newtonian fluid; its viscosity is captured as a function of the local temperature using a power-law model
• Blood flow is modeled as turbulent using the Realizable k-ε model
• The blood flow rate is held constant in each simulation
• Flow rates over a range of 3 L/min to 9 L/min are examined
• The exit gauge pressure of the pump is taken to be 100 mm Hg
• The density of blood is taken to be constant at 1060 kg/m^3
• This leads to a mass flow rate range of 0.053 kg/s to 0.159 kg/s
• The impeller rotation speed is varied from 5000 rpm to 10000 rpm
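The quoted mass-flow range follows directly from the volumetric flow range and the assumed blood density; a one-line conversion reproduces it:

```python
# Sketch: reproduce the mass-flow-rate range quoted in the operating
# conditions from the volumetric flow range and the blood density.
RHO_BLOOD = 1060.0  # kg/m^3, as stated above

def mass_flow_kg_s(flow_l_min: float) -> float:
    """Convert a volumetric flow in L/min to a mass flow in kg/s."""
    flow_m3_s = flow_l_min * 1e-3 / 60.0   # L/min -> m^3/s
    return RHO_BLOOD * flow_m3_s

print(round(mass_flow_kg_s(3.0), 3))  # 0.053 kg/s at 3 L/min
print(round(mass_flow_kg_s(9.0), 3))  # 0.159 kg/s at 9 L/min
```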
Progress of Task-1.2
CELL-ZONE CONDITIONS:
• The flow domain is divided into stationary and rotating domains in order to model impeller rotation at optimized calculation times
• An unstructured hybrid mesh of around 200,000 cells is used to model the blood flow
• Y+ values obtained are < 10
• The convergence criterion is set to 10^-3
CFD studies on the preliminary impeller
CFD studies on the impeller with straightener & diffusers
OTHER IMPELLER MODELS DEVELOPED FOR CFD STUDIES
Studies made by Mr. B. Sirish
Progress of Task-1.3
Design optimization of the impeller
Depending upon the severity of heart failure, the flow requirement varies from 6 to 15 L/min and the pressure rise from 60 to 120 mm Hg. This implies that the device should be able to operate at different design points rather than at one specific design point.
Studies made by Mr. Hameer Singh
Objective of the optimization study: to determine the speed and size of the pump at which it can generate the physiological flow rate, pressure rise and shear stress.
Method:
1. Identify feasible input parameters (such as speed and size of the rotor) and output parameters (pressure difference, shear stress, flow rate, efficiency):
X ~ IP(N, S): input function with speed and size as variables.
Y ~ OP(H, Q, τ, η): output function with pressure rise, shear stress, flow rate and efficiency as variables.
2. Set the ranges: N: 3000 to 20000 rpm; D: 20-30 mm; H: 20 to 120 mm Hg; Q: 6-15 L/min; τ: 100-400 Pa; η: 50-75%.
3. Determine the trade-off (feasible operating region) between X and Y by applying MOGA (Multi-Objective Genetic Algorithm) to obtain the following trade-off charts:
(i) Flow rate Q vs N
(ii) Pressure rise H vs N
(iii) Shear stress τ vs N
(iv) Efficiency η vs N
The trade-off amongst the parameters gives the set of feasible candidates that satisfy the requirements.
4. Apply a Decision Support Process (DSP) to find the best possible solution from the generated feasible solutions.
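At the heart of the trade-off step above is non-dominated (Pareto) filtering of candidate designs. A minimal sketch of that filtering, with hypothetical candidates described by just two of the objectives (pressure rise, which we want high, and shear stress, which we want low); a full MOGA run would generate and evolve such candidates rather than enumerate them:

```python
# Sketch: Pareto (non-dominated) filtering, the core of the MOGA trade-off
# step described above. Each candidate is (pressure_rise_mmHg, shear_Pa);
# pressure rise should be high, shear stress low. Values are hypothetical.

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

candidates = [(60, 150), (80, 300), (100, 390), (70, 120), (90, 410)]
print(pareto_front(candidates))  # → [(80, 300), (100, 390), (70, 120)]
```

The surviving set is the "feasible candidates" pool to which a decision-support step would then be applied.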
CONTROL SCHEME
Block diagram for BLDC control with Hall-sensor feedback
Task 3: Control System Design
Progress of Task-3.0
Studies made by Dr. P. Anil Kumar
PRINCIPLE OF OPERATION
• When the current of phase B is at its zero crossing, Ia and Ic are equal and opposite. Similarly, Ean and Ecn are equal and opposite when Eb is zero.
• Vabbc = Vab - Vbc = Ean - 2Ebn + Ecn = -2Ebn
• The operation Vab - Vbc (Vabbc) enables detection of the zero crossing of the phase-B back EMF.
• Similarly, Vbcca enables detection of the zero crossing of the phase-C back EMF, and the Vcaab waveform gives the zero crossing of the phase-A back EMF.
Therefore, the zero-crossing instants of the back-EMF waveforms may be estimated indirectly from measurements of only the three terminal voltages of the motor.
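The idea that the line-voltage combination crosses zero exactly where the phase-B back EMF does can be checked numerically. The sketch below assumes idealized sinusoidal back EMFs purely for illustration (a real BLDC back EMF is trapezoidal, as in the cited paper):

```python
import math

# Sketch: show that e_an - 2*e_bn + e_cn changes sign at the same sample
# instants as the phase-B back EMF itself. Balanced sinusoidal back EMFs
# are an illustrative assumption here.

def back_emf(theta, shift):
    return math.sin(theta - shift)

def zero_crossings(samples):
    """Indices where the sampled waveform changes sign."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] * samples[i] < 0]

thetas = [2 * math.pi * k / 1000 for k in range(1000)]
e_a = [back_emf(t, 0.0) for t in thetas]
e_b = [back_emf(t, 2 * math.pi / 3) for t in thetas]
e_c = [back_emf(t, 4 * math.pi / 3) for t in thetas]

# v_abbc = v_ab - v_bc reduces to e_an - 2*e_bn + e_cn when the resistive
# and inductive drops cancel, as described above.
v_abbc = [a - 2 * b + c for a, b, c in zip(e_a, e_b, e_c)]

print(zero_crossings(v_abbc) == zero_crossings(e_b))  # True: same instants
```

This is why measuring only the three terminal voltages is enough to infer the commutation instants without Hall sensors.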
Ref: P. Damodharan, R. Sandeep and K. Vasudevan, “Simple position sensorless starting method for brushless DC motor”, IET Electr. Power Appl., vol. 2, no. 1, January 2008.
Comparison of the Hall-sensor output transitions in a BLDC motor with the back EMF in a sensorless BLDC motor
DIFFERENT CONTROL TECHNIQUES IMPLEMENTED TO ACHIEVE PULSATILITY IN THE SPEED OF THE BLDC MOTOR WITHOUT HALL SENSORS
• SIMPLE PI BASED CONTROL
• SIMPLE PID BASED CONTROL
• FUZZY LOGIC CONTROL ALONE
• FUZZY PI BASED CONTROL
• FUZZY PID BASED CONTROL
MOTOR PARAMETERS

Parameter | Value
Stator phase resistance | 0.7 Ω
Stator phase inductance | 2.72 × 10^-3 H
Back-EMF flat area | 120 deg
Inertia | 2.12 × 10^-7 kg·m²
Viscous damping | 1 × 10^-7 N·m·s
Mechanical time constant | 0.00262 s
Torque constant | 0.05 N·m/A (peak)
Voltage | 24 V
Voltage constant | 5.236 V/krpm
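A quick sanity check on the table: the voltage (back-EMF) constant in V/krpm and the torque constant in N·m/A should be numerically equal once converted to SI units (V·s/rad), since both derive from the same flux linkage (for a trapezoidal BLDC the equality is approximate):

```python
import math

torque_constant = 0.05     # N*m/A (peak), from the table above
voltage_constant = 5.236   # V/krpm, from the table above

# 1 krpm = 1000 rev/min = 1000 * 2*pi / 60 rad/s
ke_si = voltage_constant / (1000.0 * 2.0 * math.pi / 60.0)  # ~0.050 V*s/rad
```

The converted value is 0.050 V·s/rad, matching the 0.05 N·m/A torque constant, so the two table entries are mutually consistent.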
PI-based implementation of sensorless operation in MATLAB/Simulink
Speed output with PI
PID-based implementation of sensorless operation in MATLAB/Simulink
Kp = 0.8, Ki = 0.01, Kd = 0.025
Speed output with PID
Simulink model of sensorless operation with fuzzy logic controller
Fuzzy rule base (rows: error e; columns: change in error Δe):

e \ Δe | Nb | Nm | Z  | Pm | Pb
Nb     | Nb | Nb | Nb | Nm | Z
Nm     | Nb | Nm | Pb | Z  | Nm
Z      | Z  | Pb | Z  | Pm | Pb
Pm     | Pm | Z  | Pm | Nb | Nm
Pb     | Z  | Pb | Pb | Pb | Pm

The input and output variables are linguistic variables describing fuzzy sets, where Nb is negative big, Nm is negative medium, Z is zero, Pm is positive medium and Pb is positive big.
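The rule base can be encoded as a simple lookup, transcribed verbatim from the table above; a full controller would additionally fuzzify e and Δe with membership functions and defuzzify the weighted rule outputs:

```python
LABELS = ("Nb", "Nm", "Z", "Pm", "Pb")

# Rows: error e; columns: change in error de -- transcribed from the table above.
RULES = (
    ("Nb", "Nb", "Nb", "Nm", "Z"),   # e = Nb
    ("Nb", "Nm", "Pb", "Z",  "Nm"),  # e = Nm
    ("Z",  "Pb", "Z",  "Pm", "Pb"),  # e = Z
    ("Pm", "Z",  "Pm", "Nb", "Nm"),  # e = Pm
    ("Z",  "Pb", "Pb", "Pb", "Pm"),  # e = Pb
)

def rule_output(e_label, de_label):
    """Linguistic output for a given (e, de) pair of labels."""
    return RULES[LABELS.index(e_label)][LABELS.index(de_label)]
```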
Speed output with fuzzy logic controller alone
Implementation of sensorless control of BLDC motor with PI & fuzzy controller in MATLAB/Simulink
Speed output with Fuzzy-PI
Implementation of sensorless control of BLDC motor with PID & fuzzy controller in MATLAB/Simulink
Speed output with Fuzzy-PID
Results comparison (sensorless control of BLDC motor)
Fig. a: pulsatile speed taken as reference [16]; Fig. b: speed output with PI controller only
Fig. c: speed output with PID controller only; Fig. d: speed output with fuzzy controller only
Fig. e: speed output with fuzzy-PI controller; Fig. f: speed output with fuzzy-PID controller
Remarks
• Pulsatile blood flow in mechanical circulatory devices (MCDs) can be achieved by pulsating the speed of the motor. Owing to its small size, low weight and simple control, the BLDC motor offers a practicable solution, and its speed can be controlled by different methods.
• An investigation of the different control techniques leads to the conclusion that a hybrid of PID with fuzzy-based control gives smaller ripples and a smaller percentage of peak overshoot in motor speed than simple PI, PID, fuzzy alone or fuzzy-PI.
• Ripple in the motor speed while realizing pulsatility may lead to turbulence in the blood flow, the effects of which are a subject of further research.
MANUFACTURE
Two manufacturers have been identified:
1. M/s Kartik Molds and Dies, Hyderabad (MD: Shri Nagendra Chowdari)
2. M/s Vasanta Tool Craft, Hyderabad (MD: Shri Dayananda Reddy)
The pump and impeller will be injection-moulded in polycarbonate; the die material is Stavax steel. The initial few units will be made with soft tooling.
• The pump is moulded in two halves and the impeller is moulded separately. After inserting the impeller, the two halves are joined with a suitable adhesive.
• If required, the inside of the pump and the impeller will be surface-coated.
Progress of Tasks 4.0 & 5.0
Efforts made by
Dr. A. Subhananda Rao
SURFACE ENGINEERING OF VADs
• Interaction between blood and the surface of the biomaterials causes complications such as thromboembolism, bleeding and infection.
• Improvement in anticoagulant behaviour can be achieved with active or passive coatings.
• Active coatings interact directly with blood and interfere with the process of initial proliferation.
• Passive coatings act as barriers between the bulk material and blood.
Progress of Task-6.0
Studies made by
Dr. Sai Charan
6. Design of biocompatible surface material
COATINGS APPLIED TO DIFFERENT VADs

VAD | Type | Coating | Remarks
EVA Heart (Sun Medical) | Second generation | MPC polymer (2-methacryloyloxyethyl phosphorylcholine) on titanium alloy | Limited life due to biodegradability
HeartMate II (Thoratec) | Second generation | Textured/smooth surface combination | –
INCOR (Berlin Heart) | Axial flow, magnetically suspended | Heparin on polyurethane pump | Effective for several months; costly
VentrAssist LVAD (Ventracor) | Centrifugal, magnetically suspended | Diamond-like carbon (DLC) on titanium casing | One limitation is micro-cracks; modified with elastic features, promising
OBJECTIVES
• Evaluation of the fragility/shelf life of RBC & WBC upon mechanical shear/stress
• Assessment of factors contributing to inflammation post mechanical shear
• Identification of predominant skin infections at the insertion sites
• Identification of intervention strategies to prevent infections
Progress of Task-7.0
Hemolysis & Thrombosis (Infection) studies
Studies made by
Dr. Siva Sai
RBC & WBC FRAGILITY
• To standardize different techniques associated with the assessment of RBC & WBC fragility
• To delineate the mechanism associated with fragility of RBC & WBC upon exposure to mechanical shear/stress
• To identify strategies to minimize the effect of mechanical shear on RBC & WBC
The interaction between RBCs and endothelial cells (ECs) creates an adaptable life-support system.
MARKERS FOR RBC FRAGILITY
• Loss of cell surface area and alteration of morphology
• Shedding of Hb content into plasma
• Level of cell-surface phosphatidylserine (PS)
• Levels of cell-membrane stomatin
• Fluid lipid layer
• Semi-flexible filaments (spectrin)
• Deficiency of anti-oxidative enzymes
EFFECT OF OSMOTIC STRESS ON RBC IN RBC–ENDOTHELIAL INTERACTIONS
RBC preparation: RBC isolation, followed by inducing osmotic stress on the RBC (hypotonic, isotonic or hypertonic).
Endothelial cell culture: endothelial cells (EAhy926 cell line) are cultured, seeded and allowed to reach confluence.
The two preparations are then combined in a static adhesion assay.
STATIC ADHESION ASSAY
• Red blood cells (45% hematocrit) were isolated from healthy volunteers' blood by centrifuging at 4000 rpm for 5 min.
• EAhy926 cell lines were grown to confluence in a 24-well plate.
• RBC were exposed to stress and added over the confluent endothelial cells (1:500 dilution), then incubated in a CO2 incubator for 30 minutes.
• Images of the first and second wash were captured with an Olympus XL microscope.
• The RBC adhering over the EAhy926 monolayer were counted using ImageJ software.
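The ImageJ counting step amounts to thresholding the micrograph and counting connected particles. A minimal pure-Python illustration of that counting on a binary mask (the actual analysis was done in ImageJ on the microscope images; this is only a sketch of the principle):

```python
def count_particles(mask):
    """Count 4-connected foreground blobs in a binary mask (list of 0/1 rows),
    analogous to ImageJ's 'Analyze Particles' after thresholding."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                 # found a new particle
                stack = [(r, c)]           # flood-fill to mark the whole blob
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count
```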
RESULTS: static adhesion assay images, Figs (A)–(C).
Inflammation
• To standardize different techniques associated with assessment of the induction of activation and inflammation
• To understand the specific activation and inflammation markers (receptors, cytokines and chemokines) enhanced post mechanical shear/stress
• To identify factors leading to or enhancing platelet thrombosis by indirect mechanisms
• To develop strategies to minimize the effect of inflammation and of factors contributing to thrombosis post mechanical shear on blood
Markers for inflammation and WBC fragility
• Complement analysis (C5–C9)
• Cytokines (IL-1, IL-6, IL-8, IL-12, etc.)
• Cytokine receptors
• Chemokines & chemokine receptors
Infections
• To identify the common skin infections at the point of contact of the percutaneous lead with the skin
• To assess the common infections reported in case studies of AH transplants, and whether these are region-specific (differing from region to region)
• To identify existing, and develop new, intervention strategies to prevent infections
BACTERIAL, VIRAL AND FUNGAL INFECTIONS
• Necrotic/inflammatory lesions around the insertion site of a peripheral venous catheter
• Cutaneous mucormycosis (fungal infection), intravascular-catheter related, in cardiac transplant cases
• Verruconis gallopava, a fungal infection affecting the heart, lung, etc., in a renal transplant patient
OUTCOMES
• Identification of stable markers which could be used for the assessment of RBC fragility and for inflammation studies.
• Application of those markers to assess whether the mechanical shear of the indigenous AH prototype developed has minimal effect on RBC & WBC.
• Identification of the common infections associated with the skin insertion sites would help in controlling them.
• Infections of the skin may vary from region to region, and hence better intervention strategies could be developed to control them.
INFECTION STUDIES
• Identification of common infections
• Identification of region-specific infections
• Assessment of existing and new intervention strategies
ESTIMATED TIMELINE: a continuous process; the major work starts post-transplantation, with continuous observation of the insertion sites.
SUPPORTING FACILITIES FROM SREENIDHI
• Inverted / phase-contrast microscope
• Spectrofluorimeter
• CO2 incubator
• Biosafety cabinet
• Gel doc
• Gas–liquid chromatography
• Spectrophotometer
• Eppendorf PCR machine
THANK YOU
CUBESAT LAUNCH BY THE STUDENTS OF SREENIDHI INSTITUTE OF SCIENCE AND TECHNOLOGY, HYDERABAD, AT THE BALLOON FACILITY, TIFR, HYDERABAD
SREESAT TEAM
1. Event Highlight:
To great applause, students of Sreenidhi Institute of Science and Technology (SNIST) launched SREESAT-1 from the Tata Institute of Fundamental Research campus, Hyderabad, on the early morning of 1 August 2019. SREESAT-1 is a remarkable feat by SNIST students towards the goal of designing and developing a small satellite to monitor atmospheric effects on the earth's environment. SREESAT-1, an emulated CubeSat, will help in collecting environmental data on climatic conditions such as pressure, temperature and humidity, and also on the effect of solar radiation in the ultraviolet and infrared bands. It will also obtain a location fix with the help of GPS.
SREESAT-1, weighing about 400 grams, was launched close to the start of the ozone layers, along with the payloads of other national and international organisations, in a high-altitude balloon, by a team of 25 students under the guidance of experts at TIFR. The entire SREESAT-1 was manufactured by the students as part of their summer internship program using commercial off-the-shelf (COTS) components costing around 25 thousand rupees, and it has the capability to record data for 4 hours.
2. Project Brief (SREESAT):
Satellite technology has established itself as one of the most promising technologies of the present and future. It is able to meet several primary needs with respect to communication networks, area surveillance, imaging, etc., and has been helpful to every sector of technology development. Satellite technology has also involved the student community at large, giving students experience of multiple technologies and of various unexplored areas of space and earth science. Student satellites are mostly referred to as 'smallsats'. Most of the global institutes, including Indian institutes, are working in this area.
SNIST is probably the first institute in Hyderabad to take up 'smallsat' design, development and launch as one of its future milestones. The project is named SREESAT. The SREESAT satellite club was inaugurated by the Executive Director, SNIST, in March 2019.
The task is identified as a 1U CubeSat launch as the primary consideration. To accomplish this task, three distinct phases have been identified. In the first phase, a balloon launch is proposed with emulated CubeSat hardware, with the additional function of environment monitoring. This project will help, directly or indirectly, the Pollution Control Board of Telangana and other parts of the country. Subsequently, CubeSat design, development and launch activity will be taken up. It is also intended to study solar radiation and predict some of the climate-change-related issues.
A series of discussions was held with the authorities of the Telangana Environmental Board and the TIFR balloon facility to finalise the configuration. Substantial support from the TIFR balloon facility has been extended to accomplish the task.
3. Introduction:
'Smallsat' covers a very wide range of satellites, from femtosatellites with a mass as minuscule as 10 to 100 grams up to satellites of about 500 kg. In between are 'CubeSats', which often range from 1 unit to 6 units in size; such CubeSats can have a mass as small as about 1 kg, or exceeding 10 kg for a 6-unit CubeSat. Small satellites, mostly CubeSats, are normally student projects or are targeted at a very specific scientific mission. A small satellite could also be one of a thousand designed for a massive low-Earth-orbit constellation. These so-called small satellites might be mass-produced with components produced by 3-D printers, or painstakingly crafted by students working to create their own in-orbit satellite. The difference in mass between the tiniest smallsat and a substantial one can be more than three orders of magnitude. CubeSat-sized systems essentially operate in an automated fashion to support remote sensing, telecommunications or other functions, which also enables lower-cost ground segments. Small satellites can be classified as follows:
o Minisatellite (100–500 kg)
o Microsatellite (10–100 kg)
o Nanosatellite (1–10 kg)
o Picosatellite (0.1–1 kg)
o Femtosatellite (0.01–0.1 kg)
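The classification above maps directly to a small helper; by these mass bands, the 400-gram SREESAT-1 described earlier falls in the picosatellite class, and a 1–1.33 kg CubeSat kit is a nanosatellite:

```python
def classify_smallsat(mass_kg):
    """Map launch mass (kg) to the small-satellite classes listed above."""
    bands = [
        (100.0, 500.0, "Minisatellite"),
        (10.0, 100.0, "Microsatellite"),
        (1.0, 10.0, "Nanosatellite"),
        (0.1, 1.0, "Picosatellite"),
        (0.01, 0.1, "Femtosatellite"),
    ]
    for low, high, name in bands:
        if low <= mass_kg < high:
            return name
    return "Outside the smallsat classes"

print(classify_smallsat(0.4))   # SREESAT-1 at ~400 g -> Picosatellite
print(classify_smallsat(1.33))  # 1U CubeSat kit -> Nanosatellite
```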
A CubeSat is a nanosatellite mostly used for research projects; kits are available up to 1.33 kg. The CubeSat began as a collaborative effort in 1999 between Jordi Puig-Suari, a professor at California Polytechnic State University (Cal Poly), and Bob Twiggs, a professor at Stanford University's Space Systems Development Laboratory (SSDL). CubeSats have enabled many major and smaller universities, and even high schools, to enter the space program. FIGURE 1 shows a CubeSat in LEO.
FIGURE - 1
In addition to educational institutions, government agencies and commercial groups around the world have developed CubeSats. They recognized that the small, standardized platform of the CubeSat can help reduce the costs of technical developments and scientific investigations. This lowered barrier to entry has greatly increased access to space, leading to an exponential growth in the popularity of CubeSats since their inception. The Indian Space Research Organisation (ISRO) has promoted multiple CubeSat programs from various institutes. A list of Indian institutes that have participated in small-satellite launch programs is given in Table 1.
Table 1: Indian institutes that have built and launched small satellites

Sl. No. | Satellite | Launch Date | Launch Mass | Launch Vehicle | Institute and Mission
1 | NIUSAT | 23 June 2017 | 15 kg | PSLV-C38 / Cartosat-2 Series | Noorul Islam University, Tamil Nadu; multispectral imagery for agricultural crop monitoring and disaster management support
2 | PISAT | 26 Sep 2016 | 5.25 kg | PSLV-C35 / SCATSAT-1 | PES Institute of Technology, Bengaluru; remote-sensing nanosatellite
3 | PRATHAM | 26 Sep 2016 | 10 kg | PSLV-C35 / SCATSAT-1 | Indian Institute of Technology Bombay; counting electrons in the Earth's ionosphere
4 | SATHYABAMASAT | 22 June 2016 | 1.5 kg | PSLV-C34 / Cartosat-2 Series | Sathyabama University, Chennai; collecting data on greenhouse gases
5 | SWAYAM | 22 June 2016 | 1 kg | PSLV-C34 / Cartosat-2 Series | College of Engineering, Pune; point-to-point messaging services for the HAM community
6 | Jugnu | 12 Oct 2011 | 3 kg | PSLV-C18 / Megha-Tropiques | Indian Institute of Technology Kanpur; data for agriculture and disaster monitoring
7 | SRMSat | 12 Oct 2011 | 10.9 kg | PSLV-C18 / Megha-Tropiques | SRM Institute of Science and Technology; monitoring greenhouse gases in the atmosphere
8 | STUDSAT | 12 Jul 2010 | under 1 kg | PSLV-C15 / CARTOSAT-2B | Consortium of seven engineering colleges from Karnataka and Andhra Pradesh; hands-on student experience of the design, fabrication and realisation of a space mission at minimum cost
9 | ANUSAT | 20 April 2009 | 40 kg | PSLV-C12 / RISAT-2 | Anna University Satellite; demonstration of technologies related to message store-and-forward operations
Specific science investigation areas include biological science, study of near-Earth objects, climate change, snow/ice coverage, orbital debris, planetary science, space-based astronomy, and heliophysics. Communications, propulsion, navigation and control, and radiation testing lead the technology topics in this area. Other notable technologies are solar sails, additive manufacturing, femtosatellites, and smartphone satellites.
4. SNIST Scope of Work:
SNIST, one of the prestigious institutes in Hyderabad, has initiated a satellite program named SREESAT. It is primarily aimed at demonstrating the institute's capability in satellite launch, and also at monitoring solar radiation, which will help in weather forecasting and the study of climate change. CubeSats are widely used in student satellite programs.
CubeSats help to reduce costs. The standardized aspects of CubeSats make it possible for companies to mass-produce components and offer off-the-shelf parts. CubeSats come in several sizes, which are based on the standard CubeSat 'unit', referred to as 1U. A 1U CubeSat is a 10 cm cube with a mass of approximately 1 to 1.33 kg. In the years since the CubeSat's inception, larger sizes have become popular, such as the 1.5U, 2U, 3U and 6U, but new configurations are always in development. Examples of a 1U and a 3U are shown in FIGURE 2. The CubeSat also needs an appropriate ground station.
Figure - 2
1U standard dimensions: 10 cm x 10 cm x 11 cm
3U standard dimensions: 10 cm x 10 cm x 34 cm
The Institute has decided to approach the project in three steps.
Phase 1: Proof of concept through a balloon launch using COTS components.
Phase 2: Proof of concept and launch experience through an off-the-shelf CubeSat kit and ground system.
Phase 3: Design and development of in-house hardware and launch capability.
This task will bring the institute into the global arena.
5. Phase 1: Proof of concept through a balloon launch using COTS components
A balloon launch will meet two requirements:
a) It will allow verification of the emulated CubeSat and its ground-system interface.
b) It will be used for environment monitoring of different gases such as NO2, NH3 and CH4, and will also record temperature, humidity, altitude, sunlight and position.
The balloon-launched unit is referred to as the emulated CubeSat. There are two parts: a) the on-board unit, or SREESAT; and b) the ground station.
SREESAT-1: SREESAT-1 was configured to perform the initial development, and the majority of the work was accomplished during the summer internship program. Our first step was to develop a payload for environmental studies at higher altitudes. With this project we can determine, from the data retrieved from the sensors, how climatic parameters such as pressure, temperature and humidity vary at higher altitudes. We can also analyse UV, IR and visible radiation from the data collected by a Si1145 sensor. The project can also be useful for work involving navigation and environmental monitoring using different payloads. All the data received from these sensors are stored on an SD card and a pen drive; they will be further analysed to study environmental effects on the earth's atmosphere.
5.1. Description
We used two sensors: a BME280 and a Si1145. The BME280 is used for measuring pressure, humidity and temperature; its nominal ranges are pressure 300–1100 hPa, temperature −40 to +85 °C, and humidity 0–100% RH. The Si1145 is a low-power, reflectance-based infrared proximity, UV-index and ambient-light sensor. It delivers good performance over a wide dynamic range and a variety of light sources, including direct sunlight. In order to track position, we equipped the project with a GPS module, which gives us data on latitude, longitude, altitude, number of satellites connected, and speed.
We used two Arduino UNOs, and the entire circuit is powered by a 3.7 V lithium-ion rechargeable battery along with a booster (MT3608) that outputs 8 V. We also provided a charging module (TP4056) for the battery.
Finally, all the sensor data are stored through the CH375B disk read-write module, and the GPS data on the SD-card module. We added an LED indication to confirm that data are being written to the storage modules.
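The logging scheme described above (one timestamped record of sensor readings per row, written to removable storage) can be sketched as follows. The flight code actually ran on the Arduino UNOs; this Python version, with invented field names, only illustrates the record format:

```python
import csv
import io

# Hypothetical column names for one logged record.
FIELDS = ["t_s", "pressure_hpa", "temp_c", "humidity_pct",
          "uv_index", "lat", "lon", "alt_m"]

def make_logger(stream):
    """Write the CSV header and return a writer for subsequent samples."""
    writer = csv.writer(stream)
    writer.writerow(FIELDS)
    return writer

def log_sample(writer, sample):
    """Append one row; fields missing from the sample are left blank."""
    writer.writerow([sample.get(f, "") for f in FIELDS])

buf = io.StringIO()  # stands in for the SD-card / pen-drive file
w = make_logger(buf)
log_sample(w, {"t_s": 0.0, "pressure_hpa": 1008.2, "temp_c": 24.5})
```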
The project is realised with components available in the market. It has four boards, each of size 10 cm x 10 cm:
1. Sensor board: All the sensors referred to above (sun sensor; pressure, humidity, temperature and altitude (PHT); and GPS) are mounted on this board. They are connected to the respective Arduino UNO controllers through I2C and serial interfaces.
2. Sensor controller board 1: An Arduino UNO controller and the pen-drive interface are mounted on this board, which is interfaced with the sun sensor and the PHT sensors. A USB read-write chip is added to log data on the pen drive.
3. Sensor controller board 2: Another Arduino UNO controller and the SD-card interface are mounted on this board, which is interfaced with the GPS on the sensor board; its data are logged on the SD card.
4. Power board: The power board carries the battery (2600 mAh), which can be charged on board. The boosted output from the battery is connected to all the other boards. There is also a provision to connect an external 5 V / 3.4 V source.
The boards are stacked one over the other as shown in Fig. 3, and all boards are placed in a thermocol box as shown in the figure.
Fig. 3: Students with the boards, ready to pack inside the thermocol box.
The final system is enclosed in the box as shown in Figure 4. There is a small connector on this box and also a small window. The connector works as a switch: when mated with an external connector, the battery becomes operational. The small window allows the LED lamp to be seen glowing, indicating that the system is functioning.
Fig. 4: SREESAT-1
Finally, this box was mounted on a NASA payload as shown in Fig. 5. The launch video is shown in Fig. 6.
Fig. 5: SNIST payload on top of the NASA payload.
Fig. 6: Launch video
6. Conclusion
We designed a payload as part of the SREESAT project, realized by the students of SNIST in close coordination with the TIFR balloon facility team. The system was tested to full satisfaction; the data recorded on the storage devices will be analyzed, and further study will be conducted to determine the atmospheric and sun-related parameters. The launch was conducted successfully.
SREESAT-1 was safely recovered on Friday. Students including Naveen, Dhanush and Sahid said that "this data will be further decoded and analysed using MATLAB".
Another group of students, including Pruthvi Shashank, Sri Sai and Narender, said that "SNIST becomes the first engineering college in Hyderabad to perform a balloon launch".
Swathi, Anjali Shivani and Mounika said, "we are fascinated to see that the work done by us is going to create a landmark in studying the effect of climate change".
Dr. J. Chattopadhyay, scientist and professor and mentor of the entire project, Dr. S. P. V. Subba Rao, HOD ECE, and Dr. S. Krishna, Associate Head ECE, witnessed the successful launch and appreciated each student's hard work.
The Secretary of SNIST, Dr. K. T. Mahhe, the Executive Director, Dr. P. Narasimha Reddy, and the Principal, Dr. T. Ch. Siva Reddy, congratulated the entire SREESAT team for their excellent contribution to the launch of SREESAT-1.
The entire SNIST team appreciated the excellent support provided by Prof. D. K. Ojha, Chairperson, DAA & TIFR Balloon Facility Committee, and Shri Suneel Kumar B, Scientist-in-charge at the TIFR Balloon Facility.
Shri D. Anand, Senior Scientific Officer (TIFR-H), who has been a driving force in encouraging the students, said that the mission was fully successful and that the payload was safely recovered about 400 km from Hyderabad. It will be handed over to the students for data analysis.
7. FUTURE TASKS:
1. A second balloon launch with a data link, pollution sensors, and a 3-D printed structure designed and developed by the SNIST Mechanical Department.
2. Realisation of the ground station.
3. Final SREESAT for satellite launch.

Dr. J Chattopadhyay, Professor ECE          Dr. SPV Subba Rao, HOD ECE
SREENIDHI INSTITUTE OF SCIENCE & TECHNOLOGY
(An Autonomous Institution approved by UGC and affiliated to JNTUH) (Accredited by NAAC with "A" Grade and NBA, and with World Bank assistance under TEQIP-I & II)
Yamnampet, Ghatkesar, Hyderabad – 501 301.
DESIGN AND STATUS REPORT OF AUTONOMOUS CAR PROJECT
Compiled by Mr. M.R.K. Naidu
INDEX
1. INTRODUCTION
2. BLOCK DIAGRAM
3. SENSORS
3.1 RADAR
3.2 CAMERA
3.3 ULTRASONIC & IMU SENSOR
4. CENTRAL COMPUTER
5. ALGORITHMS
5.1 IMAGE PROCESSING
5.2 DRIVE CONTROL
5.3 OTHER
6. STATUS
1.0 INTRODUCTION
TITLE: DESIGN & DEVELOPMENT OF AUTONOMOUS / DRIVERLESS CAR
PROJECT TEAM
1. Mr. M.R.K. Naidu, Consultant
2. Dr. Ameet Chavan, Dean (Innovation and Research)
3. Ms. K. Shivasundari, ECE
4. Ms. S. Latha, Assoc. Prof., ECE
5. Dr. T. Vijaya Saradhi, Professor, CSE
6. Dr. K. KranthiKumar, Assoc. Prof., IT
7. Dr. Ganesh Kumar, ECE
8. Mr. K.P. Surya Teja, Asst. Prof., IT
9. Dr. M. Purnachander Rao, Assoc. Prof., ECE
10. Dr. S. Krishnanand, Assoc. Prof., ECM
OVERVIEW
A self-driving car (also called an autonomous or driverless car) is a vehicle that is capable of sensing its environment and navigating without human input.
Autonomous cars combine a variety of techniques to perceive their surroundings, using sensors such as radar, laser light, GPS and computer vision. Advanced control systems interpret the sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage.
The potential benefits of autonomous cars include increased safety, increased mobility, increased customer satisfaction and reduced accidents. These benefits also include a potentially significant reduction in traffic collisions and resulting injuries, and in related costs, including less need for insurance.
SCOPE
The automotive standards define different levels of autonomy (as in Appendix-A). It is proposed to realize Level-3 autonomy, i.e., in the right conditions the car can manage most aspects of driving, including monitoring the environment; the system prompts the driver to intervene when it encounters a scenario it cannot navigate.
• Driver involvement: the driver must be available to take over at any time.
It is proposed to implement the overall system on an e-rickshaw platform. The e-rickshaw is a reasonably open platform to integrate with and does not require platform-specific expertise. The modular approach allows the system to be adapted to any platform at a later stage. The autonomous car block diagram is shown in Appendix-B.
THE SPECIFICATIONS OF THE PROPOSED AUTONOMOUS CAR:
Speed: 25 km/h
Load: 500 kg (8 persons)
Navigation: waypoint & path planning
Control computer: rugged SBC with Linux OS / ROS & application software
ECUs: engine control, steering control, brake control, throttle control, horn & indicators
Sensors:
o Forward RADAR (collision avoidance)
o Reverse RADAR (reverse collision & blind-zone detection)
o 8 ultrasonic sensors (for short-distance surround sensing)
o Inertial Measurement Unit (IMU)
o 1.3/2-megapixel camera
Drive: drive ECU / 650 W BLDC
Platform: e-rickshaw / golf cart / EV car (Reva/Mahindra), to be decided
OBJECTIVES
o Identification and interfacing of sensor modules, viz. RADAR, ultrasonic and camera.
o Development of the sensor-fusion and control ECU.
o Development of the following algorithms:
  - Object detection, speed measurement of moving objects, and tracking using RADAR
  - Object detection & classification from the vision sensor
  - Collision avoidance
  - Sensor fusion
  - Control algorithms for steering, speed, brake & horn
  - Cruise control
  - Lane change & park assist
  - In-car driver alert
  - Navigation
o Integration of all the subsystems to realize the autonomous car (Level-3).
2.0 AUTONOMOUS CAR BLOCK DIAGRAM
The block diagram comprises: a central computer; front and rear cameras on a high-speed serial bus; a sensor-fusion ECU taking inputs from the RADAR and six ultrasonic sensors; GSM/GPS networking; a CAN bus linking the central computer to the drive-control ECU; throttle, brake, and horn & indicator actuators; and vehicle environment sensors & battery.
Autonomous Car Block Diagram
SIGNAL PROCESSING & S/W FLOW
1. Video camera → vision detector s/w: processes video frames; object detection & classification; lane boundaries; object lists updated at 10 Hz and lanes at 20 Hz.
2. RADAR → RADAR detector s/w: switched between medium/long-range modes; object detection (distance & angle); object lists at 20 Hz.
3. Ultrasonic sensor → ultrasonic detector s/w: object detections at 20 Hz, output as object lists.
4. IMU → IMU s/w: computes turn rate, speed & current position at 20 Hz.
The detector outputs (object lists, speed & turn rate) are carried over the high-speed serial link and CAN buses to the central computer, which produces the tracks, current position and current speed.
Central computer / drive ECU
The drive-control algorithm takes the tracks, current speed, current position, and IMU & other sensor outputs, and commands the throttle, steering angle, brake, and horn/indicators through their respective ECUs and actuators (throttle ECU & actuator, steering ECU & actuator, brake ECU & actuator, horn/indicator ECU).
Signal Processing & S/W Flow (continued)
3.1 RADAR MODULE by Ms. S. Latha, ECE
OVERVIEW: The design configuration has two RADARs, forward & reverse. The selected module has two modes of operation, i.e., long range (LRR) and short range (SRR). The RADAR module is interfaced directly to the central computer (Jetson Xavier / Jetson Nano) through the CAN bus interface, where sensor-fusion-based tracking algorithms are implemented. These modules are development starter kits from M/s D3 Engineering based on the TI AWR1643 chipset. An enquiry was sent to M/s D3 Engineering; the response regarding supply is not encouraging, hence the following alternative approaches are being worked out. The specifications are enclosed as Annexure-1.
APPROACH 1: Interface the RADAR module to a Raspberry Pi 3 using the CAN bus. The following activities are proposed during this phase:
• Study of the RADAR module.
• Understanding the CAN interface messages.
• Interfacing the CAN shield to the RPi3.
• Simulating the CAN message packets of the RADAR.
• Implementing the above in the Python programming language.
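A sketch of the Raspberry Pi side of Approach 1. The 8-byte payload layout below is an invented example for the message-simulation step; the real AWR1643 object-message format must be taken from the module's interface documentation. Live frames would be read with the python-can library (a `can.interface.Bus` on the SocketCAN `can0` channel) instead of the simulated packing shown here:

```python
import struct

def decode_object(data):
    """Decode a simulated 8-byte RADAR 'object' CAN payload.
    Assumed little-endian layout (NOT the real AWR1643 format):
    uint16 range in cm, int16 angle in 0.1 deg, int16 speed in cm/s, uint16 id."""
    range_cm, angle_ddeg, speed_cms, obj_id = struct.unpack("<HhhH", data)
    return {"range_m": range_cm / 100.0,
            "angle_deg": angle_ddeg / 10.0,
            "speed_mps": speed_cms / 100.0,
            "id": obj_id}

# Simulating a CAN message packet, as proposed above:
frame = struct.pack("<HhhH", 1250, -155, 320, 7)
obj = decode_object(frame)  # 12.5 m, -15.5 deg, 3.2 m/s, id 7

# With real hardware the frame would come from python-can instead, e.g.:
#   import can
#   bus = can.interface.Bus(channel="can0", bustype="socketcan")
#   msg = bus.recv(timeout=1.0)
#   obj = decode_object(msg.data)
```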
Block diagram: RADAR module with a 4-TX / 3-RX array antenna, connected to the CAN bus (CAN-H / CAN-L).
APPROACH 2: As the availability of the RADAR module is likely to take time, an alternative LiDAR module from M/s LeddarTech (Vu8/M16 platform) is being considered. The only disadvantage is that this sensor supports distances only up to 80 m. Testing is fully possible with this approach.
3.2 CAMERA MODULE by Ms. K. Shivasundari, ECE
OVERVIEW: The design configuration has two cameras, forward & reverse. As the cameras are located at the front and back of the vehicle, a minimum interface cable length of 5 m is desired; the present automotive standards supporting low-overhead protocols are FPD-Link III / GMSL2 / USB3. An automotive-grade camera from M/s D3 Engineering has been identified; the specifications are enclosed as Annexure-2. An enquiry was sent to M/s D3 Engineering; the response regarding supply is not encouraging, hence the following alternative approach is being worked out.
ALTERNATIVE APPROACH: Use a COTS USB camera, 2 MP / 1080p resolution, with H.264 video compression and variable focal length. The following activities are proposed during this phase:
• Study of the camera module specifications.
• Understanding the various camera interface standards for automotive applications.
• Interfacing the camera CSI-2 interface to the RPi3 / Jetson Nano.
• Testing the above interfaces using the Python/C programming languages.
• Capturing images & video.
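Why H.264 compression matters for the COTS USB camera: an uncompressed 1080p stream would not fit the bandwidth of a typical USB 2.0 link, as a quick calculation shows (assuming, for illustration, a YUV 4:2:2 stream at 2 bytes per pixel):

```python
def raw_bandwidth_mbps(width, height, fps, bytes_per_pixel=2):
    """Uncompressed video bandwidth in megabits per second
    (bytes_per_pixel=2 corresponds to YUV 4:2:2)."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

rate = raw_bandwidth_mbps(1920, 1080, 30)  # ~995 Mbit/s uncompressed
# Far above USB 2.0's ~480 Mbit/s signalling rate, hence on-camera H.264
# compression (or a USB 3 link) is needed for 1080p over a 5 m cable run.
```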
Interface diagram: two camera modules connected to the Jetson Nano AI platform over USB 3.
3.3 ULTRASONIC & IMU SENSOR By
Dr. Ganesh Kumar, ECE
3.3.1 ULTRASONIC SENSOR
Design and development of a 16-channel ultrasonic sensor module.
HARDWARE: Raspberry Pi 3 - 1 No.; PGA450Q1EVM-S (Murata ultrasonic sensor MA58MF14-7N) - 16 No.; MCP2515 CAN bus SPI module - 16 No.
SOFTWARE: Python or C
BLOCK DIAGRAM:
Fig. 1 16 channel Ultrasonic Sensor Module
DESIGN AND DEVELOPMENT OF SINGLE CHANNEL ULTRASONIC SENSOR MODULE
The ultrasonic sensor (MA58MF14-7N) is used to measure distance with high accuracy and stable readings. It can measure distances from 1 meter to 7 meters. It emits an ultrasound wave at a frequency of 40-70 kHz into the air; if an object is in its path, the wave bounces back to the sensor. From the time the wave takes to reach the object and return, the distance can be calculated.
The PGA450Q1EVM-S (evaluation module small form-factor) as shown in Fig. 2,
is a fully assembled PCB design for real-world evaluation of the PGA450-Q1 ultrasonic-sensor signal-conditioner device, an ultrasonic transducer (MA58MF14-7Ns), and step-up transformer. The PGA450-Q1 device is a fully integrated system on-a-chip analog front-end for ultrasonic sensing in automotive park-assist, object-detection through air,
level sensing in large tanks, and distance measurements for anti-collision and landing assist of unmanned systems (such as drones, cameras, and robots). This highly integrated device enables a small form-factor (PGA450Q1EVM-S) and cost-optimized solution compared to discrete ultrasonic-sensor solutions. The PGA450-Q1 device can measure distances ranging from less than 1 meter up to 7 meters, at a resolution of 1 cm, depending on the transducer-transformer sensor pair used in the system. The PGA450-Q1 device has an integrated 8051 8-bit microcontroller and OTP memory for program storage to process the echo signal and calculate the distance between the transducer and the targeted object. Full programmability is available for optimization of specific end applications, and to accommodate a wide range of closed-top or open-top transducers. Configurable variables include the number of transmit pulses, driving frequency, LNA gain, and comparison signal thresholds. External communication with the PGA450-Q1 device is possible through the LIN 2.1 protocol, SPI, or UART interfaces.
Fig. 2 PGA450Q1EVM-S Schematic
The PGA450-Q1 integrates power management, low-side drivers, analog front-end, digital datapath, and interface functions to form a complete ultrasonic-sensor signal conditioning solution. The low-side drivers are programmed to drive a specific frequency that matches the external ultrasonic transducer. After transmitting, the same transducer receives the reflected echo signal. The analog front-end filters and amplifies this signal before storing the data in memory. The integrated 8051 microcontroller then processes this data to extract the useful information, which typically includes how far away an object is from the transducer. At this point in the process, the information is transmitted through LIN, SCI, or UART.
BASIC OPERATION:
To measure the distance of an object, interface the PGA450Q1EVM-S ultrasonic sensor module to the Raspberry Pi as shown in Fig. 3. The PGA450-Q1 device can operate from either OTP or DEVRAM memory, and from the LIN, UART, or SPI communication interfaces. Here we use SPI for communication. A Python program is used to communicate with the device.
Fig. 3 Interface PGA450Q1EVM-S Ultrasonic sensor module to Raspberry Pi to measure distance SPI PROTOCOLS: The serial peripheral interface (SPI) uses a 1-byte command word and 2 or 3 additional bytes for the complete command. Table 1 lists how the SPI transfer width (number of bytes) varies depending on whether the SPI is a read or write to the IRAM, ESFR, EEPROM, OTP, or FIFO data access. For a SPI transfer to the internal register file, the parity P depends on the address. For SPI transfers to the memories (IRAM, ESFR, OTP), the read data is available on the next SPI transfer. That is, when reading from a memory location, the user must send a subsequent transfer to get the data back.
Table 1 SPI transfer width
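The address-dependent parity bit mentioned above can be illustrated with a generic even-parity calculation. This is a simplified sketch only; the exact bit layout of the PGA450-Q1 SPI frame is defined in the device datasheet:

```python
def parity_bit(address, width=16):
    """Even-parity bit over a register address: returns 1 when the number of
    1-bits in the address is odd, so that address plus parity together carry
    an even count of ones. Generic illustration only; the real PGA450-Q1
    frame layout is device-specific."""
    count = bin(address & ((1 << width) - 1)).count("1")
    return count & 1

print(parity_bit(0b1011))  # three 1-bits → parity bit 1
```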
STEPS FOR SERIAL-PERIPHERAL INTERFACE:
1. The PGA450-Q1 device can be put into a RESET state where the microcontroller is not active. During this state, SPI is the only digital interface that can be used.
2. The low-side drivers (the BURST_A_EN bit of EN_CTRL must be toggled from 0 to 1) can be triggered to begin an ultrasonic burst, and the analog front-end and digital data path can still store the returned echo signal in the FIFO RAM.
3. The FIFO RAM data can be read over SPI, allowing an external microprocessor (Raspberry Pi 3) to process the data. This is 768 bytes of echo data stored in FIFO memory.
4. To convert a FIFO memory data point into a distance equivalent, the following equation is used:
Distance (m) = [BlankingTime + (FIFOSampleNumber × Downsample × ADCSamplePeriod)] × SpeedOfSound / 2
Where:
• BlankingTime = 0 to 255 dec = 0 to 4.08 ms
• FIFOSampleNumber = 0 to 767
• Downsample rate = 25 to 50
• ADCSamplePeriod = 1 µs (1 MSPS)
• SpeedOfSound (at room temperature) = 343 m/s
An example: [2.048 ms + (500 × 40 × 1 µs)] × 343 m/s / 2 = 3.78 m
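The distance conversion above can be checked with a few lines of Python (parameter defaults taken from the worked example):

```python
def fifo_distance_m(fifo_sample, downsample, blanking_time_s=2.048e-3,
                    adc_sample_period_s=1e-6, speed_of_sound=343.0):
    """Convert a FIFO echo-sample index into a one-way distance in meters.
    The echo time is blanking time plus (sample index x downsample rate x
    ADC sample period); dividing by 2 accounts for the round trip."""
    echo_time = blanking_time_s + fifo_sample * downsample * adc_sample_period_s
    return echo_time * speed_of_sound / 2

print(round(fifo_distance_m(500, 40), 2))  # → 3.78, matching the example
```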
OUTCOME AND FUTURE WORK: The Texas Instruments PGA450-Q1 small form-factor evaluation module (EVM) allows the distance of an object to be measured. This can be extended to connect 16 ultrasonic sensors to the Electronic Control Unit using MCP2515 CAN-bus modules.
3.3.2 INERTIAL MEASUREMENT UNIT
An inertial measurement unit (IMU) is an electronic device that measures and reports a body's (here, the autonomous car's) specific force, angular rate, and sometimes orientation, using a combination of accelerometers, gyroscopes, and sometimes magnetometers.
GYROS: A gyro measures the rate of rotation, which has to be integrated over time to calculate the current angle. This integration causes the gyro to drift. However, gyros are good at measuring quick, sharp movements.
ACCELEROMETERS: Accelerometers are used to sense both static (e.g. gravity) and dynamic (e.g. sudden starts/stops) acceleration. They do not need to be tracked like a gyro and can measure the current angle at any given time. Accelerometers, however, are very noisy, so they are only useful for tracking angles over a long period of time.
SETTING UP THE IMU AND I2C
The IMU used here is a BerryIMU, which uses an LSM9DS0 consisting of a 3-axis gyroscope, a 3-axis accelerometer and a 3-axis magnetometer. This IMU communicates via the I2C interface. The image below shows how to connect the BerryIMU to a Raspberry Pi.
[Wiring diagram: BerryIMU connected to the Raspberry Pi over I2C]
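The standard way to combine the two sensors described above (a fast but drifting gyro and a noisy but drift-free accelerometer) is a complementary filter. The sketch below is a minimal illustration with an assumed mixing factor of 0.98; it is not tied to the BerryIMU driver code:

```python
import math

def accel_pitch_deg(ax, ay, az):
    """Pitch estimate from raw accelerometer axes, valid when only gravity acts."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def complementary_filter(angle_prev, gyro_rate_dps, accel_angle_deg, dt, alpha=0.98):
    """Fuse the integrated gyro rate (trusted short-term) with the
    accelerometer angle (trusted long-term). alpha=0.98 is an assumed value."""
    return alpha * (angle_prev + gyro_rate_dps * dt) + (1 - alpha) * accel_angle_deg

# One filter step: previous angle 0 deg, gyro reads 10 deg/s over 20 ms,
# accelerometer says 0 deg.
angle = complementary_filter(0.0, 10.0, accel_pitch_deg(0.0, 0.0, 1.0), 0.02)
```

In a real loop, `complementary_filter` is called once per IMU sample with `dt` equal to the sample period, and the result feeds the drive-control algorithms.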
4.0 CENTRAL COMPUTER By
Dr. T. Vijaya Saradhi, CSE & Mr. O. Madhu, ECE
The central computer is the heart of the autonomous car. The hardware platform design is based on the Jetson Xavier high-end AI embedded platform, and the ECU for sensor fusion is designed using the Jetson Nano AI embedded platform. The hardware, based on a highly integrated Arm CPU and NVIDIA GPU, supports connectivity to all the sub-systems. The embedded software implements the following: a ROS- and Linux-based ECU (Electronic Control Unit), and various drive algorithms to control the navigation of the autonomous car based on inputs from sensors such as RADAR, camera, ultrasonic and IMU. The application software implements the control, HMI, navigation and networking modules.
JETSON XAVIER
SPECIFICATIONS:
GPU 512-core Volta GPU with Tensor Cores
CPU 8-core ARM v8.2 64-bit CPU, 8MB L2 + 4MB L3
Memory 16GB 256-Bit LPDDR4x | 137GB/s
Storage 32GB eMMC 5.1
DL Accelerator (2x) NVDLA Engines*
Vision Accelerator 7-way VLIW Vision Processor*
Encoder/Decoder (2x) 4Kp60 | HEVC/(2x) 4Kp60 | 12-Bit Support
Size 105 mm x 105 mm
Deployment Module (Jetson AGX Xavier)
Developer Kit I/Os Jetson AGX Xavier Module Interface
PCIe X16 x8 PCIe Gen4/x8 SLVS-EC
RJ45 Gigabit Ethernet
USB-C 2x USB 3.1, DP (Optional), PD (Optional) Close-System Debug and Flashing Support on 1 Port
Camera Connector (16x) CSI-2 Lanes
M.2 Key M NVMe
M.2 Key E PCIe x1 + USB 2.0 + UART (for Wi-Fi/LTE) / I2S / PCM
40-Pin Header UART + SPI + CAN + I2C + I2S + DMIC + GPIOs
HD Audio Header High-Definition Audio
eSATAp + USB3.0 Type A SATA Through PCIe x1 Bridge (PD + Data for 2.5-inch SATA) + USB 3.0
HDMI Type A/eDP/DP HDMI 2.0, eDP 1.2a, DP 1.4
uSD/UFS Card Socket SD/UFS
JETSON NANO
SPECIFICATIONS:
GPU NVIDIA Maxwell architecture with 128 NVIDIA CUDA® cores
CPU Quad-core ARM Cortex-A57 MPCore processor
Memory 4 GB 64-bit LPDDR4, 1600MHz 25.6 GB/s
Storage 16 GB eMMC 5.1
Video Encode 250MP/sec 1x 4K @ 30 (HEVC) 2x 1080p @ 60 (HEVC) 4x 1080p @ 30 (HEVC) 4x 720p @ 60 (HEVC)
9x 720p @ 30 (HEVC)
Video Decode 500MP/sec 1x 4K @ 60 (HEVC) 2x 4K @ 30 (HEVC) 4x 1080p @ 60 (HEVC) 8x 1080p @ 30 (HEVC) 9x 720p @ 60 (HEVC)
Camera 12 lanes (3x4 or 4x2) MIPI CSI-2 D-PHY 1.1 (1.5 Gb/s per pair)
Connectivity Gigabit Ethernet, M.2 Key E
Display HDMI 2.0 and eDP 1.4
USB 4x USB 3.0, USB 2.0 Micro-B
Others GPIO, I2C, I2S, SPI, UART
Mechanical 69.6 mm x 45 mm 260-pin edge connector
5.0 ALGORITHMS
5.1 VISION PROCESSING MODULE By
Dr. T. Vijaya Saradhi
IMPLEMENTATION OF VISION PROCESSING PLAN CONSISTS OF 4 STEPS
Step-1: History of autonomous vehicles and overview of their vision processing modules and technology.
Step-2: Build a custom image processing module on an RC car controller with Jetson Nano and Python.
Step-3: Implement, train, and deploy a baseline version of the image processing module tasks with Keras.
Step-4: Refactor, re-train, and re-deploy an improved image processing module with Keras.
STEP-1:
In this project we are developing a level-3 autonomous vehicle. These are vehicles that can drive themselves in certain situations, such as in traffic on divided highways. When in autonomous mode, human intervention is not needed, but a human driver must be ready to take over when the vehicle encounters a situation that exceeds its limits.
IMAGE PROCESSING MODULE DESCRIPTION:
This is the core module in the autonomous vehicle project. In this module, the vehicle's central computer considers input data from different vision sensors such as cameras and Lidars. The module completes its intended functionality with the help of NVIDIA's predefined functionalities (JetPack) and facilities like HD maps and localization techniques. The module is divided into subsystems responsible for tasks such as:
1) Lane detection 2) Object Detection 3) Object classification 4) Collision avoidance.
This module identifies the lanes of the road by processing camera and Lidar data. This is useful for identifying the ROI and the objects which lie in the ROI. To detect lanes we are developing an efficient algorithm, and we are planning two different versions: one CNN-based, which can drive our vehicle even when the lanes are not completely visible, and a second using image processing techniques available in the OpenCV library in Python.
IN CNN BASED APPROACH:
Lane keeping is an important feature for self-driving cars. The convolutional neural network (CNN) model takes raw image frames as input and outputs the steering angles accordingly. The model is trained and evaluated using a dataset which contains front-view image frames and the steering-angle data captured when driving on the road. Unlike the traditional approach that manually decomposes the autonomous driving problem into technical components such as lane detection, path planning and steering control, the end-to-end model can directly steer the vehicle from the front-view camera data after training. It learns how to keep in lane from human driving data.
Diagram: CNN approach
THE TECHNOLOGY BEHIND SELF-DRIVING CARS
To build a self-driving car we need the following technology, which makes self-driving possible.
CAMERAS:
Most self-driving cars utilize multiple cameras for mapping their surroundings. Unlike LIDAR, cameras can pick up lane markings, traffic lights, road signs and other signals, which give a lot more information for the car to navigate on roads. Most self-driving cars use a combination of sensors and cameras, with deep-learning computer vision playing a major role in self-driving technology.
DEEP LEARNING FOR SELF-DRIVING CARS:
As we know already, cameras are key components in most self-driving vehicles. Most of the camera tasks fall into some type of computer vision detection or classification problem. Recent advancements in deep learning and computer vision enable self-driving cars to do these tasks easily.
CNN ARCHITECTURE:
The model consists of 5 convolution layers, 1 normalization layer and 3 fully connected layers. The network weights are trained to minimize the mean-squared error between the steering command output by the network and the ground truth.
This convolution neural network (CNN) has about 27 million connections and 250 thousand parameters.
DATA COLLECTION:
The training data is the image feed from the cameras and the corresponding steering angle.
The data collection is quite extensive considering the huge number of possible scenarios the system will encounter. The data is collected from a wide variety of locations, climate conditions, and road types. Also, training with data from the human drivers is not enough. The network should learn to recover from mistakes to provide its best.
In order to solve this problem, the data is augmented with additional images that show different positions where the car is shifting away from the center of the lane and different rotations from the direction of the road. For example, the images for two specific off-center shifts from the left and right cameras and the remaining range of shifts and rotations are simulated using viewpoint transformation of the image from the nearest camera.
The collected data is labeled according to road type, weather conditions, and driver’s activity. The driver could be staying on lane, changing lane, turning and so on.
In order to train a convolutional neural network (CNN) that can stay on lane, we take only the images where the driver is staying on lane. The images are also down sampled to 10 frames-per-second (FPS), as many of the frames would be similar and wouldn’t provide more information for the CNN model. Also, to avoid a bias towards driving straight all the time, more frames that represent road curves are added to the training data.
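The frame-selection step described above (keeping curve frames while thinning out straight-driving frames to avoid bias) can be sketched in plain Python. The frame format and the threshold values here are assumptions for illustration:

```python
import random

def balance_frames(frames, straight_keep=0.2, angle_threshold=5.0, seed=0):
    """Keep every frame recorded on a curve, but only a fraction of the
    near-straight frames, so the training set is not dominated by straight
    driving. Each frame is a (image_id, steering_angle_deg) pair -- a
    hypothetical format; the thresholds are illustrative, not from the report."""
    rng = random.Random(seed)
    kept = []
    for image_id, angle in frames:
        if abs(angle) >= angle_threshold or rng.random() < straight_keep:
            kept.append((image_id, angle))
    return kept

frames = [("f1", 0.0), ("f2", 12.0), ("f3", -8.0), ("f4", 1.0)]
print(balance_frames(frames, straight_keep=0.0))  # → [('f2', 12.0), ('f3', -8.0)]
```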
TRAINING:
In order to train the model, data from three cameras as well as the corresponding steering angle is used. The camera feeds and the steering commands are time-synchronized so each image input has a steering command corresponding to it.
TRAINING THE NEURAL NETWORK: Images are fed into the CNN model which outputs a proposed steering command. The proposed steering command is then compared with the actual steering command for the given image, and the weights are adjusted to bring the model output closer to the desired output. Once trained, the model is able to generate steering commands from the image feeds coming from the single center camera.
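The weight-adjustment loop described above can be shown in miniature with a one-parameter linear model trained by gradient descent on the same mean-squared-error objective. This is a toy stand-in for backpropagation over the CNN weights, not the actual training code:

```python
def train_steering(samples, lr=0.1, epochs=200):
    """Toy version of the training loop: minimise the mean-squared error
    between predicted (w * x) and actual steering angle y for a 1-feature
    linear model. Real training applies the same gradient update to all
    CNN weights via backpropagation."""
    w = 0.0
    for _ in range(epochs):
        # d/dw of mean((w*x - y)^2) over the batch
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad
    return w

# Samples where the true relation is y = 2x: the loop recovers w ≈ 2.
print(round(train_steering([(1.0, 2.0), (2.0, 4.0)]), 4))  # → 2.0
```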
The trained network is used to generate steering commands from a single front-facing center camera.
STEP-2: BUILDING IMAGE PROCESSING MODULES
BRIEF DETAILS OF LANE DETECTION ALGORITHM:
Read the road image or frame from video.
Convert it to grayscale (3 channels to a single channel) to increase processing speed.
Apply a filter, typically a Gaussian filter with an (n x n) kernel, to smooth the image and avoid processing unnecessary noise.
Apply Canny edge detection to find the edges of the road, using the gradient method with derivatives as well as low and high thresholds.
Next, find the ROI with the help of standard matplotlib and a mask operation using logical operations such as "logical and".
Find lanes with the help of Hough transforms; by using polar coordinates we can predict the lane boundaries clearly.
Align this with the original image so that we can identify the lanes.
If it is a video, repeat the process in a loop that handles each frame of the video.
With the combination of the above algorithm and a neural network, we can develop a lane departure warning system.
In our module, the second and third tasks are object detection and classification. For these we identified, by studying the literature, the most efficient algorithm: YOLO (You Only Look Once). This is a CNN-based algorithm that reduces the complexity of Faster R-CNN. We can implement the YOLO algorithm using Microsoft's COCO model, which consists of features of 80 standard classes, and Google's TensorFlow, which can be imported as a standard library in Python. We are using the DarkNet framework for development. After developing this, we can train our CNN with our own set of object images as well as our videos to improve the identification and classification tasks. Apart from identifying objects along with their names, we are trying to measure the distance of each object. In this module we want to add some more standard objects to the existing object set. Here we are also trying to solve the problem of handling unknown objects.
2. OBJECT DETECTION AND CLASSIFICATION ALGORITHM:
Import necessary Python libraries like OpenCV and matplotlib.
Import the existing pre-trained CNN, DarkNet.
Load the configuration files and trained weights into the existing DarkNet framework by instantiation.
Load the TensorFlow COCO model dataset for identifying and classifying the object class labels.
Resize the image to fit the size of the first layer of the neural network.
Divide the image into S x S grids (typically 13 x 13).
Determine the Non-Maximal Suppression (NMS) threshold probability so that only high-probability bounding boxes are considered.
Set the Intersection over Union (IoU) threshold to predict the bounding box. This is useful when multiple objects are present in a region, as well as to discard similar grids.
Using this, we can detect and classify objects with probability greater than some threshold.
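The IoU and NMS thresholding steps above can be sketched in pure Python. This is a simplified stand-in for the library routines (e.g. OpenCV's cv2.dnn.NMSBoxes), with boxes given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def non_max_suppression(boxes, scores, score_thr=0.5, iou_thr=0.4):
    """Keep only the highest-scoring box among heavily overlapping detections:
    discard low-score boxes, then walk the rest in score order, dropping any
    box whose IoU with an already-kept box exceeds the threshold."""
    order = sorted((i for i, s in enumerate(scores) if s >= score_thr),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
print(non_max_suppression(boxes, [0.9, 0.8, 0.7]))  # → [0, 2]
```

The two overlapping boxes collapse to the higher-scoring one, while the distant box survives.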
Example: convolution layers
3. MOTION DETECTION: Another task in our module is motion detection. Using standard image processing libraries we built a program to identify objects which are in motion. This can be used to distinguish static and dynamic objects.
4. COLLISION AVOIDANCE: Forward-collision warning (FCW) uses cameras, radar, or laser (or some combination thereof) to scan the road ahead and to alert the driver if the distance to a vehicle ahead is closing too quickly. The system alerts the driver with an audible alarm. As part of the initial implementation, we are developing a CNN that changes the vehicle's path by a specified angle if it finds any obstacle.
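The "closing too quickly" test in FCW is commonly expressed as time-to-collision (distance divided by closing speed). The sketch below illustrates that logic; the 2.5 s alert threshold is an assumed value, not one specified in this report:

```python
def time_to_collision(distance_m, closing_speed_ms):
    """Seconds until impact if the gap keeps closing at the current rate.
    Returns None when the gap is not closing (speed <= 0)."""
    if closing_speed_ms <= 0:
        return None
    return distance_m / closing_speed_ms

def fcw_alert(distance_m, closing_speed_ms, ttc_threshold_s=2.5):
    """Trigger the audible forward-collision warning when the time to
    collision drops below the threshold (2.5 s is an assumed value)."""
    ttc = time_to_collision(distance_m, closing_speed_ms)
    return ttc is not None and ttc < ttc_threshold_s

print(fcw_alert(20.0, 10.0))   # 20 m gap closing at 10 m/s → TTC 2 s → True
```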
STEP-3:
IMPLEMENTATION AND TRAINING THE MODEL:
1. MOTION DETECTION IMPLEMENTATION:

import cv2
import numpy as np

w = 800
h = 600
cap = cv2.VideoCapture(0)
cap.set(3, w)   # frame width
cap.set(4, h)   # frame height
if cap.isOpened():
    ret, frame = cap.read()
else:
    ret = False
while ret:
    # the difference between two consecutive frames highlights moving objects
    ret, frame1 = cap.read()
    ret, frame2 = cap.read()
    d = cv2.absdiff(frame1, frame2)
    gray = cv2.cvtColor(d, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    ret, th = cv2.threshold(blur, 20, 255, cv2.THRESH_BINARY)
    dilated = cv2.dilate(th, np.ones((3, 3), np.uint8), iterations=3)
    c, hierarchy = cv2.findContours(dilated, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(frame1, c, -1, (0, 0, 255), 2)
    cv2.imshow("original", frame2)
    cv2.imshow("dup", frame1)
    if cv2.waitKey(1) == 27:   # Esc key
        break
cv2.destroyAllWindows()
cap.release()

2. EYES AND FACE DETECTION IMPLEMENTATION:

import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier('C:/Users/kanth/Downloads/opencv/sources/data/haarcascades/haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('C:/Users/kanth/Downloads/opencv/sources/data/haarcascades/haarcascade_eye.xml')
cap = cv2.VideoCapture(0)
while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]
        # detect eyes only inside the detected face region
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)
    cv2.imshow('img', img)
    if cv2.waitKey(1) == 27:   # Esc key
        break
cap.release()
cv2.destroyAllWindows()
EXECUTION SCREEN SHOTS:
3. OBJECT CLASSIFICATION (YOLO) IMPLEMENTATION:
EXECUTION SCREEN SHOTS:
STEP-4: REFACTOR, RE-TRAIN, AND RE-DEPLOY IMPROVED CODES FOR TESTING:
Once the trained model achieves good performance in the simulator, it is loaded on the JETSON TX2 in the test car. The JETSON TX2 is a computer specially designed for autonomous cars. For the on-road tests, the performance metric is calculated as the fraction of time during which the car is performing autonomous steering.
5.2 IMAGE PROCESSING ALGORITHM TO DETECT OBJECTS LIKE ROAD LANE, SPEED BREAKER, POT-HOLE BY CAMERA IN INDIAN ROAD SCENARIO
By
O. Madhu, ECE
CHALLENGES FACED ON INDIAN ROADS:
Taking Human Direction: In India, although most places are controlled automatically by traffic signals, there are places and times where the traffic lights fail to work as intended, so the system has to take inputs from the traffic police and move accordingly.
Lane Detection: Compared to the US or UK, the roads in India are more curvy and irregular by nature, so the system must be able to identify turns and curves accurately and make decisions accordingly.
Speed Breaker Detection: Due to the inconsistent speed of vehicles, there are several speed breakers along the road. A major task of the system is to identify a speed breaker, which can be of any type, and slow down the vehicle accordingly.
Pit-hole and Pot-hole Detection: Due to the diverse weather conditions in India, the roads tend to get damaged quickly, leading to pot-holes. The autonomous driving car should be able to detect pot-holes and take appropriate action.
Animal Detection: There are times when an animal such as a cow, dog or cat can enter the vehicle's path while it is traversing. This is a challenging task to overcome, as the car should sense the animal and take appropriate action.
LANE DETECTION ALGORITHM USING OPENCV AND THE PYTHON LANGUAGE
import cv2 as cv
import numpy as np
from numpy import ones, vstack
from numpy.linalg import lstsq
from statistics import mean

def auto_canny(image, sigma=0.33):
    # compute the median of the single channel pixel intensities
    v = np.median(image)
    # apply automatic Canny edge detection using the computed median
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    edged = cv.Canny(image, lower, upper)
    # return the edged image
    return edged

def draw_lanes(img, lines, color=[0, 255, 255], thickness=2):
    # if this fails, go with some default line
    try:
        # find the maximum y value for a lane marker
        # (since we cannot assume the horizon will always be at the same point)
        ys = []
        for i in lines:
            for ii in i:
                ys += [ii[1], ii[3]]
        min_y = min(ys)
        max_y = 800
        new_lines = []
        line_dict = {}
        for idx, i in enumerate(lines):
            for xyxy in i:
                # calculate the definition of a line, given two sets of coords
                # (modified from http://stackoverflow.com/questions/21565994/method-to-return-the-equation-of-a-straight-line-given-two-points)
                x_coords = (xyxy[0], xyxy[2])
                y_coords = (xyxy[1], xyxy[3])
                A = np.vstack([x_coords, ones(len(x_coords))]).T
                m, b = lstsq(A, y_coords, rcond=None)[0]
                # calculate our new, extended xs
                x1 = (min_y - b) / m
                x2 = (max_y - b) / m
                line_dict[idx] = [m, b, [int(x1), min_y, int(x2), max_y]]
                new_lines.append([int(x1), min_y, int(x2), max_y])
        # group lines with similar slope and intercept into candidate lanes
        final_lanes = {}
        for idx in line_dict:
            final_lanes_copy = final_lanes.copy()
            m = line_dict[idx][0]
            b = line_dict[idx][1]
            line = line_dict[idx][2]
            if len(final_lanes) == 0:
                final_lanes[m] = [[m, b, line]]
            else:
                found_copy = False
                for other_ms in final_lanes_copy:
                    if not found_copy:
                        if abs(other_ms * 1.2) > abs(m) > abs(other_ms * 0.8):
                            if abs(final_lanes_copy[other_ms][0][1] * 1.2) > abs(b) > abs(final_lanes_copy[other_ms][0][1] * 0.8):
                                final_lanes[other_ms].append([m, b, line])
                                found_copy = True
                                break
                        else:
                            final_lanes[m] = [[m, b, line]]
        line_counter = {}
        for lanes in final_lanes:
            line_counter[lanes] = len(final_lanes[lanes])
        # keep the two most common lane candidates
        top_lanes = sorted(line_counter.items(), key=lambda item: item[1])[::-1][:2]
        lane1_id = top_lanes[0][0]
        lane2_id = top_lanes[1][0]

        def average_lane(lane_data):
            x1s, y1s, x2s, y2s = [], [], [], []
            for data in lane_data:
                x1s.append(data[2][0])
                y1s.append(data[2][1])
                x2s.append(data[2][2])
                y2s.append(data[2][3])
            return int(mean(x1s)), int(mean(y1s)), int(mean(x2s)), int(mean(y2s))

        l1_x1, l1_y1, l1_x2, l1_y2 = average_lane(final_lanes[lane1_id])
        l2_x1, l2_y1, l2_x2, l2_y2 = average_lane(final_lanes[lane2_id])
        return [l1_x1, l1_y1, l1_x2, l1_y2], [l2_x1, l2_y1, l2_x2, l2_y2]
    except Exception as e:
        print(str(e))

def region_of_interest(img, vertices):
    """
    Applies an image mask. Only keeps the region of the image defined by the
    polygon formed from `vertices`. The rest of the image is set to black.
    """
    # define a blank mask to start with
    mask = np.zeros_like(img)
    # 3-channel or 1-channel fill color depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255
    # fill pixels inside the polygon defined by "vertices" with the fill color
    cv.fillPoly(mask, vertices, ignore_mask_color)
    # return the image only where mask pixels are nonzero
    masked_image = cv.bitwise_and(img, mask)
    return masked_image

cap = cv.VideoCapture("original1.mp4")
ret, frame1 = cap.read()
prvs = cv.cvtColor(frame1, cv.COLOR_BGR2GRAY)
hsv = np.zeros_like(frame1)
hsv[..., 1] = 255
max_width = frame1.shape[1]
max_height = frame1.shape[0]
width_delta = int(max_width / 5)
# trapezoidal region of interest covering the road ahead
vertices = np.array([[(50, max_height), (max_width, max_height),
                      (max_width / 2 + width_delta, max_height / 2),
                      (max_width / 2 - width_delta, max_height / 2)]], np.int32)
minLineLength = 100
maxLineGap = 10
cnt = 1
while True:
    ret, frame2 = cap.read()
    if not ret:
        break
    next = cv.cvtColor(frame2, cv.COLOR_BGR2GRAY)
    gauss = cv.GaussianBlur(next, (5, 5), 0)
    canny_edges = cv.Canny(gauss, 200, 300)
    roi_image = region_of_interest(canny_edges, vertices)
    lines = cv.HoughLinesP(roi_image, rho=6, theta=np.pi / 60, threshold=160,
                           lines=np.array([]), minLineLength=40, maxLineGap=25)
    try:
        l1, l2 = draw_lanes(next, lines)
        cv.line(frame2, (l1[0], l1[1]), (l1[2], l1[3]), [0, 255, 0], 20)
    except Exception:
        pass
    try:
        for coords in lines:
            coords = coords[0]
            try:
                cv.line(roi_image, (coords[0], coords[1]),
                        (coords[2], coords[3]), [255, 0, 0], 3)
            except Exception as e:
                print(str(e))
    except Exception:
        pass
    cv.imshow('line_detect.jpg', frame2)
    if cnt == 10:
        cv.imwrite('LANE_detected.jpg', frame2)
    k = cv.waitKey(30) & 0xff
    if k == 27:
        break
    elif k == ord('s'):
        cv.imwrite('opticalfb.png', frame2)
    prvs = next
    cnt = cnt + 1
cap.release()
cv.destroyAllWindows()
ORIGINAL IMAGE
EDGE DETECTED OUTPUT
REGION OF INTEREST OF EDGED IMAGE
LANE DETECTED OUTPUT
SPEED BREAKERS DETECTION ALGORITHM USING CAMERA
SPEED BREAKER DETECTION BLOCK DIAGRAM
ORIGINAL IMAGE FETCHED BY CAMERA, LABELED AS ‘1’
PROCESSED IMAGE USING IMAGE PROCESSING TOOLS
ORIGINAL IMAGE FETCHED BY CAMERA, LABELED AS ‘0’
PROCESSED IMAGE USING IMAGE PROCESSING TOOLS
POTHOLES:
POTHOLE IMAGE
EDGE DETECTED IMAGE OF POTHOLE
OBSERVATIONS AND FUTURE WORK: The algorithms were designed and tested in a Python OpenCV environment on a personal computer. They were implemented from a basic understanding of the underlying computer vision and image-processing concepts, and they still have to be compared with industry state-of-the-art algorithms. The algorithms also have to be tested for robustness; for this they must be trained on many test samples, so machine learning, deep learning and neural network techniques have to be adopted. The algorithms further have to be tested on the NVIDIA Jetson Nano. We are happy to inform that we have communicated the paper "Design Challenges and Implementation of an Autonomous Vehicle for the Indian Scenario using NVIDIA Jetson Nano" to the International Conference on Communications, Signal Processing and VLSI (IC2SV2019) conducted by NITW.
5.3 DRIVE CONTROL ALGORITHMS
by Dr. Krishnand

The purpose of this project is the creation of an autonomous car which should be able to drive automatically, without any driver, in urban areas. Road traffic injuries cause an estimated 2.5 million deaths worldwide per year, with more than 50 million injured. A study using British and American crash reports as data found that 87% of crashes were due solely to driver factors. A self-driving car can therefore be very safe and useful for all mankind. To realize this, several concurrent software applications process data using Artificial Intelligence to recognize and propose a path which an intelligent car should follow. Additionally, a driverless car can reduce the distance between cars, lowering road loading, reducing the number of traffic jams, avoiding human errors, and allowing disabled people (even blind people) to travel long distances. A computer as a driver is far less likely to make errors and will be able to calculate very precisely in order to reduce the distance between cars.

REGRESSION ALGORITHMS
This kind of algorithm is good at predicting events. Regression analysis evaluates the relation between two or more variables and collates the effects of variables measured on distinct scales. It is driven mostly by three metrics:
› The shape of the regression line
› The type of dependent variables
› The number of independent variables

Images (from camera or radar) play a significant role in ADAS actuation and localization, while for any algorithm the biggest challenge is to develop an image-based model for feature selection and prediction.
Regression algorithms leverage the repeatability of the environment to create a statistical model of the relation between an image and a given object's position within that image. The statistical model can be learned offline and, by allowing image sampling, provides fast online detection. It can furthermore be extended to other objects without requiring extensive human modeling. In the online stage, the algorithm returns an object's position as output, together with a confidence in the object's presence. Regression algorithms are thus suited to long offline learning followed by short online prediction.
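As a toy illustration of this offline-learning / online-prediction split (our own example, not the project's actual model), the code below fits a least-squares regression relating a simple image feature, the pixel row of an object's bounding-box bottom, to the object's distance. The training data, horizon row and the 1/(row - horizon) feature are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical training data: bounding-box bottom row (pixels) of an object
# at known distances. An object lower in the image is closer to the camera.
rows = np.array([400.0, 350.0, 300.0, 260.0, 230.0])
horizon = 200.0                       # assumed horizon row in the image
dist = 1500.0 / (rows - horizon)      # synthetic ground-truth distances (metres)

# Offline stage: distance is roughly linear in 1/(row - horizon),
# so fit a line in that transformed feature.
x = 1.0 / (rows - horizon)
slope, intercept = np.polyfit(x, dist, 1)

def predict_distance(row):
    # Online stage: map a detected object's pixel row to an estimated distance.
    return slope / (row - horizon) + intercept
```

The model is learned once offline; the online stage is a single arithmetic evaluation per detection, which is what makes regression attractive for fast in-vehicle prediction.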
NEURAL NETWORK REGRESSION

Neural networks are utilized for regression, classification or unsupervised learning. They group data that is not labeled, classify labeled data, or forecast continuous values after supervised training. Neural networks normally use a form of logistic regression in the final layer of the net to change continuous values into discrete variables such as 1 or 0.
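That final-layer idea can be shown in a few lines (a toy sketch with made-up activations and weights, not the project's network): a single logistic unit squashes a continuous weighted sum into a probability, which is then thresholded into a 0/1 label.

```python
import numpy as np

def sigmoid(z):
    # Logistic function used as the net's final-layer activation.
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical continuous activations from the previous layer, with fixed
# weights and bias chosen purely for illustration.
activations = np.array([0.2, -1.5, 3.0])
weights = np.array([1.0, 0.5, 0.8])
bias = -0.5

prob = sigmoid(activations @ weights + bias)   # continuous value in (0, 1)
label = int(prob >= 0.5)                       # discretized to 1 or 0
```

In a real network the weights would be learned by supervised training; only the squash-then-threshold step at the output is what the paragraph above describes.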
› Cascaded neural networks to recognize factors like traffic signs
› Feed-forward neural network to recognize whether an object is a car
› Genetic algorithms to find the best topology of the network
Various characteristics (dynamically changeable based on need):
› Input from images
› Inputs from radar
› Steering angle logic
› Brake handling logic
› Accelerator logic
› Error handling (consideration of error rates)
SET OF INPUTS NEEDED:
1. Speed
2. Condition of road (Fuzzy / Crisp)
3. Traffic light characteristics
4. Variation in moving obstacles
5. Labels (type of vehicles / objects)
6. Alignment of vehicle on road (Fuzzy / Crisp)
7. Location of each object at each instance of time within a specified distance from the car (assumptions made for width of road)
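One way to carry these inputs B1-B7 between the batches is a single record type. The sketch below is our own; the field names, units and encodings are illustrative assumptions, not the project's actual data format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DriveInputs:
    # B1-B7 from the list above. "Fuzzy" items are stored as raw readings
    # here and fuzzified later by Batch 2.
    speed: float                    # B1: vehicle speed, assumed km/h
    road_condition: float           # B2: 0.0 (bad) .. 1.0 (good)
    traffic_light: str              # B3: e.g. "red" / "amber" / "green"
    obstacle_variation: float       # B4: change in moving-obstacle positions
    labels: List[str]               # B5: detected vehicle / object types
    road_alignment: float           # B6: lateral offset from lane centre (m)
    object_locations: List[Tuple[float, float]] = field(default_factory=list)  # B7: (x, y) per object

sample = DriveInputs(speed=30.0, road_condition=0.8, traffic_light="green",
                     obstacle_variation=0.1, labels=["car", "cyclist"],
                     road_alignment=-0.2,
                     object_locations=[(4.5, 1.0), (12.0, -2.5)])
```

Batch 1 would populate such records (the report stores them in a MATLAB file instead), and Batches 2 and 3 would read the fuzzy and crisp fields respectively.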
OPERATIONS
1. Data De-duplication
› Identification of amount of data
› Pruning
› Storage aspects
2. Dimensionality Reduction
› Identification of correlation between features
› Pruning
3. Modeling aspects
› Steering
› Brake
› Accelerator
4. Performance Improvement
› Indices
› Comparison of Models Operation 1 to be taken care of later based on information received from Image processing and Radar and other groups Current set of operations done by batches – Operations 2 and 3. ROLE OF BATCH 1 : Collect inputs specified in Band perform pruning for B - 7, generate two sets of outputs ( Fuzzy ) and ( Crisp )as the case may be and store the same in a Matlab file. ROLE OF BATCH 2 : Read Mat lab file, take relevant Fuzzy inputs and (some crisp inputs if Needed) and perform the following operations:
› Generate membership functions of each input and output
› Fuzzification
› Inference
› Aggregation
› Defuzzification
› Generate inferences
› Identify the angle and speed of steering, brake, accelerator and horn

ROLE OF BATCH 3: Read the MATLAB file and take only the crisp inputs:
› Choose the type of neural network (Backpropagation / Feedforward)
› Identify the training function
› Choose the number of layers
› Choose the learning rate
› Generate crisp values of steering, brake angle and accelerator

On completion of the same, integration and conflict resolution of Batches 2 and 3 is to be carried out.

Goals in Brake Model
› Energy recuperation
› Wheel slip regulation
› Reduce torque tracking error (may / may not be applicable)
› Improve deceleration during emergency braking
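The fuzzification step performed by Batch 2 can be sketched with a triangular membership function, the simplest common choice. This is a generic illustration; the project's actual membership functions and linguistic sets are not specified in this report, so the "medium speed" set below is an assumption.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function: rises from a to a peak at b, falls to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical linguistic set "medium speed" peaking at 40 km/h.
speed = 30.0
mu_medium = trimf(speed, 20.0, 40.0, 60.0)   # degree of membership in [0, 1]
```

Each crisp input (speed, road alignment, etc.) would be mapped through several such sets; the resulting membership degrees feed the inference, aggregation and defuzzification steps listed above.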
Driver Trust in Automated Driving Systems
› Aspects to be considered:
Peak driving speed
Peak lateral distance between object and vehicle (golf cart)
First steering-input timing of the human driver while overtaking or passing
Subjective / objective evaluation

Steering-level control: the automated driving system must maintain larger peak lateral distances and engage in steering maneuvers.

› Input aspects considered:
Distance of the golf cart from other objects / labels
Change in distance
Alignment of vehicle on road
Slippery roads
Outputs to be generated for steering model control
› Steering angle
› Angular velocity
› Vehicle’s centre of gravity
› Slip angle
› Lateral acceleration
Computation of steering stability when angular velocity falls below 0.05 rad/sec
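The steering outputs listed above (slip angle, angular velocity, lateral acceleration) can be related by the standard kinematic bicycle model. The sketch below is a textbook illustration, not the project's controller; the wheelbase and centre-of-gravity position are assumed values.

```python
import math

def steering_outputs(speed, steering_angle, wheelbase=2.0, lr=1.0):
    """Kinematic bicycle model.
    speed in m/s, steering_angle in rad; wheelbase and lr (distance from the
    rear axle to the centre of gravity) are assumed vehicle parameters.
    Returns (slip angle, yaw/angular velocity, lateral acceleration)."""
    slip = math.atan(lr * math.tan(steering_angle) / wheelbase)      # rad
    yaw_rate = speed * math.cos(slip) * math.tan(steering_angle) / wheelbase
    lat_acc = speed * yaw_rate                                       # a = v * omega
    return slip, yaw_rate, lat_acc

slip, omega, a_lat = steering_outputs(speed=10.0, steering_angle=0.1)
# Per the report, a steering-stability computation is triggered when the
# angular velocity falls below 0.05 rad/s.
needs_stability_check = abs(omega) < 0.05
```

At 10 m/s with a 0.1 rad steering angle the yaw rate is well above the 0.05 rad/s threshold, so no stability computation would be triggered for this sample.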
LOGIC FOR HORN AUTOMATION

INPUTS CONSIDERED:
› Labels
› Distance of vehicle (ascending horn levels with distance information)

[Block diagram: inputs B1 to B6 feed the fuzzy controller; input B7 is pruned and fed to the neural network; both processing paths generate the outputs.]
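The horn logic in the block diagram, object labels plus distance mapped to ascending horn levels, might be sketched as below. The label set, distance thresholds and level encoding are illustrative assumptions, not the project's calibrated values.

```python
def horn_level(label, distance_m):
    """Map a detected object and its distance to an ascending horn level:
    0 = silent, 1 = short beep, 2 = sustained horn.
    Thresholds here are illustrative, not the project's calibrated values."""
    if label not in ("pedestrian", "cyclist", "car", "truck"):
        return 0          # unknown label: stay silent
    if distance_m > 20.0:
        return 0          # far enough away
    if distance_m > 8.0:
        return 1          # approaching: short beep
    return 2              # close: sustained horn

level = horn_level("pedestrian", 6.0)
```

In the actual system this decision would come out of the fuzzy controller / neural network paths shown in the diagram rather than fixed thresholds.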
6. STATUS AS ON 19th JULY 2019
The following is the status of the various sub-systems:

Ultrasonic Sensor - Dr. Ganesh Kumar
Ultrasonic sensors are mounted on all sides of the car for close-proximity obstacle detection. One ultrasonic sensor has been procured, interfaced with the embedded processor and tested successfully. Design finalization and realization of a PCB to accommodate 6 sensors along with the processor is under progress.

Inertial Measurement Unit (IMU) - Dr. Ganesh Kumar
The IMU is a sensor mounted on the vehicle chassis for measuring the X, Y, Z coordinates of the vehicle, turn rate, speed, etc. One IMU has been procured, interfaced with the embedded processor and tested successfully. Design finalization and realization of a PCB along with the processor is under progress.

Jetson Nano AI-based platform - O. Madhu
This is the hardware platform for realizing the sensor ECU (Electronic Control Unit) and implementing the various algorithms. Three platforms have been procured and tested successfully. Algorithm development is in progress.

RADAR/LIDAR Sensor - Ms. S. Latha
The system contains two RADARs for forward and rear collision detection and warning. These sensors are interfaced to the Jetson Nano. The identified RADAR module has some procurement constraints, so an alternative approach is being pursued. In the meantime, detailed study and simulation of the subsystem is in progress.

Vision Sensor - Ms. K. Siva Sundari
The system contains two vision sensors for forward and rear mounting. These sensors are interfaced to the Jetson Nano. The identified camera module has some procurement constraints, so an alternative approach is being pursued. In the meantime, detailed study and simulation of the subsystem using an available camera module is in progress.

Central Computer - Dr. Kranthi Kumar, Dr. Krishnand, Mr. P. Surya Teja & Dr. Vijaysaradhi
The Jetson Xavier is the hardware platform for realizing the central ECU; it provides the interface to the HMI, the other ECUs and the network. The various drive control and navigation algorithms are implemented on this module. Procurement is under progress; algorithm and system software development is in progress on a Raspberry Pi.

Algorithms
Drive Control Algorithms - Dr. Krishnand
The implementation and testing of the drive control algorithms is in progress (about 75% completed). Three student groups are working on these algorithms.

Image Processing Algorithms - Dr. Vijaysaradhi, O. Madhu & Dr. Purnachander Rao
The implementation and testing of the image processing algorithms is in progress. One student group is working on these algorithms.

Software - Dr. Kranthi Kumar, Mr. P. Surya Teja
The HMI software is implemented on the RPi 3 and can also be ported onto the Jetson Nano.