INDEX

Sr. No. | Article Name | Author | Page No.
1 | Fault diagnosis of rotating machinery (Part A) | Shinde Akshaykumar A. | 1
2 | Smart materials and structures | Jadhav Abhijitkumar A. | 5
3 | Stress analysis of three-wheeler front fender by experimental and finite element method | Patil Abhijit V. | 11
4 | Use of nanotechnology in reduction of friction and wear | Patil Harshwardhan H. | 15
5 | Polymeric composites for biomedical implants | Gurav Malikarjun M. | 17
6 | Entropy generation minimization: the new thermodynamics of finite-size devices and finite-time processes in heat transfer | Mane Ajit R. | 21
7 | Microgroove® technology | Chendke Ghanashyam M. | 25
8 | Maintainability | Rakate Ganesh N. | 28
9 | Burnishing process | Patil H. G. | 32
10 | Superconductivity & discovery of helium | Jyoti S. Jadhav | 37
11 | Production part approval process (PPAP) | Kiran I. Nargatti | 40
12 | Oscillating I.C. engine | Kiran J. Burle | 44
13 | Bose electromagnetic suspension | Manoj M. Jadhav | 46
14 | Rapid prototyping systems: classification and advantages | Manojkumar M. Salgar | 51
15 | Trends in micro machining technologies | Omkar R. Chandawale | 55
16 | Sensing artificial skin | Pradeep B. Patil | 65
17 | Experimental study of single bubble dynamics during nucleate pool boiling heat transfer using saturated water and ammonium chloride | Prashant B. Pawar | 68
Vision of the Institute
To be a leader in producing professionally competent engineers
Mission of the Institute
We at ADCET, Ashta are committed to achieving our vision by
Imparting effective outcome based education
Preparing students through skill oriented courses to excel in their profession with
ethical values
Promoting research to benefit the society
Strengthening relationship with all stakeholders
Department Vision
To be a leader in developing mechanical engineering graduates with knowledge, skills &
ethics.
Department Mission
We, at the Department of Mechanical Engineering, are committed to achieving our vision by:
M1- Imparting effective outcome based education.
M2- Preparing students to serve the society with professional skills and ethical values.
M3- Cultivating skills and attitude among students and faculty to promote research.
Programme Educational Objectives (PEOs)
1. Provide solutions to the problems of mechanical and relevant engineering disciplines
using the knowledge of fundamental science and skills developed during graduation
studies.
2. Demonstrate an understanding about selected specific areas of mechanical
engineering in career development.
3. Communicate and function effectively using professional ethics, social and
environmental awareness.
4. Engage in lifelong learning, for effective adaptation to technological changes.
Program Outcomes (POs):
Graduates of Mechanical Engineering will be able to:
1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering
problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems
and design system components or processes that meet the specified needs with appropriate consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and
research methods including design of experiments, analysis and interpretation of data, and
synthesis of the information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex engineering
activities with an understanding of the limitations.
6. The engineer and society: Demonstrate understanding of contemporary knowledge of
engineering to assess societal, health, safety, legal and cultural issues and the consequent
responsibilities.
7. Environment and sustainability: Understand the impact of the professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and
need for sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities, write
effective reports, make effective presentations, and give and receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to manage projects and in
multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the ability to engage in
independent and life-long learning in the broadest context of technological change.
PSO1. An ability to identify and articulate local industrial problems and solve them with the use
of mechanical engineering tools for realistic outcomes.
PSO2. An ability to learn collaboratively and find cost-effective, optimal solutions to
social problems.
INFLUENCE May 2016
Department of Mechanical Engineering Page | 1
1. FAULT DIAGNOSIS OF ROTATING MACHINERY (PART-A)
Shinde Akshaykumar A.
Asst. Professor
The subject of fault diagnosis in rotating machinery is vast, including the diagnosis of
items such as rotating shafts, gears and pumps. The different types of faults that are observed in
these areas and the methods of their diagnosis are accordingly great, including vibration
analysis, model-based techniques and statistical analysis. It is not the intention to attempt to
provide a detailed coverage of all of these areas, since to do so would be beyond the scope of
this paper. Nevertheless, it is intended that the reader should be left with an understanding of the
wide range of topics involved, whilst detailed consideration is given to the subject of the fault
diagnosis of rotating shafts.
The literature on the subject of fault diagnosis is vast and wide-ranging, encompassing
such areas as general surveys, general system modeling and also methods applied to the fault
detection and isolation (FDI) of specific items of machinery, such as those found in land- and
marine-based power plants and in aero-engines. Many kinds of FDI techniques can be applied in
different situations, including both static and dynamic processes, where the same method can
often be employed using different input and output parameters, depending on the system type.
The present-day requirement for ever-increasing reliability in the field of rotor dynamics
continues to grow. Advances are
continually being made in this area, due largely to the consistent demand from the power-
generation and transportation industries. Because of progress made in engineering and materials
science, rotating machinery is becoming faster and lighter, as well as being required to run for
longer periods of time. All of these factors mean that the detection, location and analysis of
faults play a vital role in the field of rotor dynamics.
One of the major areas of interest in the modern-day condition monitoring of rotating
machinery is that of vibration. If a fault develops and goes undetected, then, at best, the problem
will not be too serious and can be remedied quickly and cheaply; at worst, it may result in
expensive damage and down-time, injury, or even loss of life. By measurement and analysis of
the vibration of rotating machinery, it is possible to detect and locate important faults such as
mass unbalance, shaft bow and cracked shafts and many more.
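The detection of a fault signature such as mass unbalance from vibration measurements can be sketched briefly. A classic indicator is a dominant component at the shaft rotation frequency (the "1X" component) in the amplitude spectrum. The sampling rate, shaft speed and amplitudes below are assumed illustrative values, not taken from this article:

```python
# Hypothetical sketch: detecting a dominant 1X (once-per-revolution) vibration
# component, the classic signature of mass unbalance, from a sampled signal.
import numpy as np

fs = 1000.0                 # sampling frequency, Hz (assumed)
rpm = 3000.0                # shaft speed, rev/min (assumed)
f_rot = rpm / 60.0          # rotation frequency = 50 Hz
t = np.arange(0, 1.0, 1.0 / fs)

# Simulated vibration: strong 1X component plus broadband noise (illustrative)
rng = np.random.default_rng(0)
signal = 2.0 * np.sin(2 * np.pi * f_rot * t) + 0.2 * rng.standard_normal(t.size)

# One-sided amplitude spectrum
spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

peak_freq = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant component at {peak_freq:.1f} Hz (1X = {f_rot:.1f} Hz)")
```

In practice the peak would be compared against the measured shaft speed from a tachometer signal; a dominant peak at exactly 1X suggests unbalance rather than, say, bearing defects, which appear at other characteristic frequencies.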
1. Fault Diagnosis of Rotating Machinery
Since the analysis and design of rotating machinery is extremely critical in terms of the
cost of both production and maintenance, it is not surprising that the fault diagnosis of rotating
machinery is a crucial aspect of the subject, receiving ever more attention. As rapid progress
is made in technology, the design of rotating machinery becomes increasingly complex, and
condition-monitoring strategies must become correspondingly more advanced in order to cope with
the physical burdens being placed on the individual components of a machine.
1.1 Mass Unbalance
Balancing involves placing correction masses onto the rotating shaft, so that centrifugal
forces due to these masses cancel out those caused by the inherent unbalance mass, thus
cancelling out vibration. Since, in most cases, it is unlikely that an additional mass can be placed
directly in the same plane as the inherent unbalance, special planes, known as balance planes,
are often chosen specifically for the purpose of adding balancing weights, especially in larger
machines. Balancing is performed on both rigid and flexible rotors and specific methods have
been developed to deal with both cases. Rigid rotors are rotors that exhibit no significant
deformation, usually due to a low speed of rotation or a high diameter/length ratio. Conversely,
flexible rotors are rotors which undergo substantial deformation whilst in operation, due to their
long lengths and high operating speeds. Flexible rotors are often used for the generation of
electrical power, an area in which much work has been carried out in the area of balancing. The
two main types of balancing are the modal and influence coefficient methods. Modal balancing
is the procedure, whereby the unbalance forces at each mode considered are cancelled out
individually. Unbalance planes are chosen and the magnitudes of unbalance components are
determined, depending on the number of modes required by the system. Suitable unbalance
masses are then chosen to counteract the forces produced by the unbalance components at each
mode. Balancing by influence coefficients involves the selection of correction masses so that
vibration is reduced to zero at various, specified shaft locations for various, constant shaft
speeds. The actual influence coefficient is the response of the shaft at a given axial point due to a
force acting at another (or the same) point along the length of the shaft.
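The influence coefficient method described above reduces to solving a small complex linear system: each reading is a phasor (amplitude and phase), and correction masses are chosen so that the predicted response cancels the baseline vibration. A minimal sketch follows; the influence matrix and baseline readings are assumed illustrative values (in practice the coefficients are obtained from trial-mass runs):

```python
# Hedged sketch of two-plane influence coefficient balancing.
# All numerical values below are assumed for illustration only.
import numpy as np

# Baseline vibration phasors at two measurement planes (assumed)
V0 = np.array([3.0 + 1.0j, 2.0 - 0.5j])

# Influence coefficients: response at measurement plane i per unit
# correction mass in balance plane j (assumed; from trial-mass runs in practice)
A = np.array([[1.2 + 0.3j, 0.4 - 0.1j],
              [0.2 + 0.1j, 0.9 + 0.2j]])

# Choose correction masses w so that V0 + A @ w = 0
w = np.linalg.solve(A, -V0)

residual = V0 + A @ w
print("correction masses (complex):", w)
print("residual vibration magnitude:", np.abs(residual))
```

The magnitude and angle of each complex entry of `w` give the size and angular placement of the correction mass in that balance plane; with more measurement planes than balance planes, a least-squares solve replaces the exact solve.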
1.2 Bent Shafts
Bends in shafts may be caused in several ways, for example due to creep, thermal
distortion or a previous large unbalance force. The forcing caused by a bend is similar to,
though slightly different from, that caused by conventional mass unbalance. There have been numerous
cases in industry where vibration has been assumed to have arisen from mass unbalance and
rotors have been balanced using traditional balancing procedures. This has repeatedly left
engineers puzzled as to why vibration persists after balancing; indeed, vibration levels may
even be worse than before balancing took place. In summary, shaft bow response is a function of
shaft speed and produces different amplitude and phase-angle relationships from those found with
ordinary mass unbalance, which is a function of the square of the speed. It is important to be
able to diagnose shaft bow from vibration measurements and thus distinguish between it and
mass unbalance.
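The speed dependence that distinguishes the two faults can be made concrete. Unbalance force grows with the square of shaft speed, while the force from a residual bow is, to first order, independent of speed. All numerical values below are assumed for illustration:

```python
# Illustrative comparison of the forcing from mass unbalance (grows as omega^2)
# versus a residual shaft bow (roughly constant with speed). Values assumed.
import numpy as np

m_e = 1e-4        # unbalance: mass * eccentricity, kg*m (assumed)
k = 1e6           # shaft lateral stiffness, N/m (assumed)
delta = 5e-5      # residual bow amplitude, m (assumed)

speeds = np.array([500.0, 1000.0, 2000.0, 4000.0]) * 2 * np.pi / 60  # rad/s

F_unbalance = m_e * speeds**2              # unbalance force ~ omega^2
F_bow = k * delta * np.ones_like(speeds)   # bow force roughly speed-independent

for w_, fu, fb in zip(speeds, F_unbalance, F_bow):
    print(f"omega={w_:7.1f} rad/s  unbalance={fu:8.2f} N  bow={fb:8.2f} N")
```

An eight-fold speed increase multiplies the unbalance force by 64 while the bow force is unchanged; run-up data showing this contrast is one practical way to separate the two faults.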
1.3 Cracked Shafts
Another important fault in rotating machinery is cross-sectional crack propagation in a
shaft, generally induced by fatigue. Although the problem has not been addressed in as much
detail as mass unbalance, a considerable amount of research has been undertaken in this area.
Vibration is caused by the presence of a crack and it is found, in general, that the vibration
amplitude is dependent on the depth and shape of the crack and on the position of the crack in
relation to the shaft mode shape. It is therefore important to be able to diagnose cracks early,
so that any necessary maintenance can be carried out before the excess vibration causes further
damage. The presence of a crack causes asymmetry in the stiffness of the rotor,
which is difficult to detect under normal running circumstances using customary monitoring
techniques. Another difficulty is encountered when examining a "breathing" crack, which is a
crack that opens and closes as the shaft rotates. This means that the shaft stiffness properties
actually vary during the course of one revolution, as the crack opens and closes.
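A simple way to model the breathing behaviour described above is to let the shaft stiffness vary smoothly with the rotation angle: full stiffness when the crack is closed, a reduced value when it is fully open. The cosine form and the depth-related stiffness reduction below are assumed illustrative choices, not taken from this article:

```python
# Hedged sketch of a "breathing" crack: shaft stiffness varies over one
# revolution as the crack opens and closes. Values are assumed.
import numpy as np

k0 = 1.0e6        # stiffness of the uncracked shaft, N/m (assumed)
dk = 0.15 * k0    # stiffness loss when the crack is fully open (assumed)

def breathing_stiffness(theta):
    """Stiffness at rotation angle theta: k0 when the crack is closed
    (theta = 0), k0 - dk when fully open (theta = pi), with a smooth
    cosine transition over one revolution."""
    return k0 - 0.5 * dk * (1.0 - np.cos(theta))

for th in np.linspace(0, 2 * np.pi, 5):
    print(f"theta={th:5.2f} rad  k={breathing_stiffness(th):.3e} N/m")
```

Feeding this time-varying stiffness into a rotor equation of motion produces the 2X and higher harmonics of running speed that crack-detection schemes commonly look for.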
1.4 Rubs
Rubs are produced when the rotating shaft comes into contact with the stationary
components of the machine. Rubs are said to be accompanied by a great deal of high-frequency
spectral activity. A rub is generally a transitory phenomenon. Rubs may typically be caused by
mass unbalance, turbine or compressor blade failure, defective bearings and/or seals, or by rotor
misalignment, either thermal or mechanical. Several different physical events may occur during
a period of contact between rotor and stator: initial impacting stage, frictional behaviour
between the two contacting parts and an increase in the stiffness of the rotating system whilst
contact is maintained, to name just three. The behaviour of the system during this period is
highly non-linear and may be chaotic. Rotor-to-stator rub is one of the malfunctions most often
encountered in rotating machinery. It is usually a secondary phenomenon resulting from
other faults. When a rub occurs, partial rub can be observed at first: during one complete
period, the rotor and stator make rub-and-impact contact once or a few times. Alternating
stress is induced in the shaft and the system can exhibit complicated vibration
phenomena. Chaotic vibration can be found under some circumstances. Gradual aggravation of
the partial rub will lead to full rub and severe vibration makes the normal operation of the
machine impossible. Mathematically, the rotor system with rotor-to-stator rub is a nonlinear
vibrating system with piecewise linear stiffness.
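The piecewise-linear stiffness mentioned in the last sentence can be written down directly: inside the clearance the rotor sees only the shaft stiffness, and once the deflection exceeds the clearance the stator stiffness acts on the penetration. All parameter values below are assumed for illustration:

```python
# Sketch of the piecewise-linear restoring force for rotor-to-stator rub.
# Parameter values are assumed illustrative numbers.
def restoring_force(x, k_shaft=1.0e6, k_stator=5.0e6, clearance=1.0e-4):
    """Restoring force (N) on the rotor at radial deflection x (m)."""
    if abs(x) <= clearance:
        return k_shaft * x                      # no contact: shaft stiffness only
    # contact: stator stiffness acts on the penetration beyond the clearance
    sign = 1.0 if x > 0 else -1.0
    return k_shaft * x + k_stator * (x - sign * clearance)

for x in (0.5e-4, 1.0e-4, 1.5e-4):
    print(f"x={x:.1e} m  F={restoring_force(x):.1f} N")
```

The kink in this force-deflection curve at the clearance is what makes the system non-linear; integrating the equation of motion with such a force is a standard way to reproduce the complicated, sometimes chaotic, rub vibration described above.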
Conclusion
This article has provided a broad review of the state of the art in fault diagnosis
techniques, with particular regard to rotating machinery; further fault diagnosis techniques will
be discussed in subsequent papers. As these techniques are adopted, operating tolerances will
become tighter, and the cost of shutting down a machine for maintenance will increasingly be
assessed against the cost of failure and lost production.
References
1. Kirk, R.G., 1984, "Insights from Applied Field Balancing of Turbomachinery,"
Proceedings of the Institution of Mechanical Engineers - Vibrations in Rotating
Machinery, pp. 397-407.
2. Parkinson, A.G., 1991, "Balancing of Rotating Machinery," Proceedings of the Institution
of Mechanical Engineers Part C - Mechanical Engineering Science, Vol. 205, pp. 53-66.
3. Nicholas, J.C., Gunter, E.J., and Allaire, P.J., 1976a, "Effect of Residual Shaft Bow on
Unbalance Response and Balancing of a Single Mass Flexible Rotor Part 1 - Unbalance
Response," Journal of Engineering for Power, Vol. 98, pp. 171-181.
4. Nicholas, J.C., Gunter, E.J., and Allaire, P.E., 1976b, "Effect of Residual Shaft Bow on
Unbalance Response and Balancing of a Single Mass Flexible Rotor Part 2 - Balancing,"
Journal of Engineering for Power, Vol. 98, pp. 182-189.
5. Salamone, D.J., and Gunter, E.J., 1980, “Synchronous Unbalance Response of an
Overhung Rotor with Disk Skew”, Journal of Engineering for Power, Vol. 102, pp. 749-
755.
6. Wauer, J., 1990, "On the Dynamics of Cracked Rotors: A Literature Survey," Applied
Mechanics Reviews, Vol. 43(1), pp. 13-17.
7. Mayes, I.W., and Davies, W.G., 1976, "The Vibrational Behaviour of a Rotating Shaft
System Containing a Transverse Crack," Proceedings of the Institution of Mechanical
Engineers - Vibrations in Rotating Machinery, pp. 53-64.
8. McFadden, P.D., and Smith, J.D., 1984, "Model for the Vibration Produced by a Single
Defect in a Rolling Element Bearing," Journal of Sound and Vibration, Vol. 96, pp. 69-92.
2. Smart Materials and Structures
Jadhav Abhijitkumar A.
Asst. Professor
The demand for new generations of industrial, military, commercial, medical,
automotive and aerospace products has fuelled research and development activities focused on
advanced materials and smart structures. This situation has been further stimulated by the
intellectual curiosity of humankind in synthesizing new classes of bio-mimetic materials. And,
of course, global competition among the principal industrial nations has also been a parameter in
the equation governing the rate of technological progress. A fundamental axiom of this field of
advanced materials is that the ultimate materials are biological materials; the aim is to replicate
their characteristics and properties in synthetic materials which can be employed in diverse
scientific and technological applications. Thus, by integrating the knowledge bases associated
with the mega-technologies of advanced materials, information technology and biotechnology,
the creation of a new generation of biomimetic materials and structures can be facilitated, with
inherent brains, nervous systems and actuation systems; what exists at present is a mere skeleton
compared with the anatomy perceived for the not-too-distant future. This quantum jump in
materials technology will revolutionize the future in ways far more dramatic than the way the
electronic chip has impacted on our lifestyles.
Fig. [a] Typical active feedback system
These new materials are termed Smart Materials or Intelligent Materials and they will
typically feature fibrous polymeric composite materials, embedded with powerful computer
chips of gallium arsenide, which will be interfaced with both embedded sensors and embedded
actuators by networks of embedded optical-fibre wave-guides, through which large volumes of
data will be transmitted at high speeds. Today’s material revolution is the cornerstone of the
triumvirate of mega-technologies which comprise the essential ingredients of this embryonic
field. These technologies will have a mutually symbiotic relationship and will significantly
impact on one another resulting in synergistic technological advances which cannot be foreseen
today. However, a natural consequence of advancing on these technological disciplines will be
the impending revolution in smart materials and structures.
The classes of smart materials and intelligent structures are diverse and the applications
of them are largely unknown. However, what is known is that this new generation of materials
will certainly revolutionize our quality of life as dramatically as state-of-the-art materials did in
the past, with stone implements triggering the Stone Age, alloys of copper and tin triggering the
Bronze Age, and the smelting of iron ore triggering the Iron Age. The time-line of humankind is
located at the dawn of a new age, The Smart Materials Age.
1. Smart Material Age
Human civilization has been so profoundly influenced by materials technologies that
historians have defined time periods by the materials that dominated during these eras. Thus, as
humankind embarked upon the continual quest for superior products and weaponry fabricated
from superior materials, terms such as the Stone Age, the Bronze Age, and the Iron Age have
entered the vocabulary. The current Synthetic Materials Age, featuring plastics and fibrous
composites, is providing a viable precursor to the dawn of a new era, the Smart Materials Age,
which will capitalise on these synthetic materials in order to exploit several eclectic emerging
technologies for the synthesis of smart materials exhibiting nervous systems, brains, and
muscular capabilities. The degree of sophistication displayed by this new generation of materials
will depend mostly on the individual applications; however, it is anticipated that various
innovations in diverse fields of science will emerge, such as nanotechnology, biomimetics,
neural networking, artificial intelligence, materials science, and molecular electronics, for
example.
This new generation of smart materials will significantly impact on civilization. For
example, some classes of materials will be able to select and execute specific functions
autonomously in response to changing environmental stimuli; others will only feature embedded
sensory capabilities in order that a structural member is manufactured to comply with the quality
control specifications. Self-repair, self-diagnosis, self-multiplication and self-degradation are
also among the characteristics anticipated of the supreme classes of smart, or intelligent,
materials. In an engineering context, all aspects of civilization will be influenced
by these new generations of innovative materials as designers capitalise on their unique
capabilities in industries as diverse as aerospace, manufacturing, automotive, sporting goods,
medicine, semiconductor technology and civil engineering.
2. Smart Structures and Development Background
Smart structures, or smart material systems, are those which incorporate actuators and
sensors that are highly integrated into the structure and have structural functionality, as well as
highly integrated control logic, signal conditioning, and signal-power amplification electronics.
Such actuating, sensing and controlling are incorporated into a structure for the purpose of
influencing its states or characteristics, be they mechanical, thermal, optical, chemical, electrical,
or magnetic. For example, a mechanically smart structure is capable of altering either its
mechanical states (its position or velocity) or its mechanical characteristics (its stiffness or
damping). An optically smart structure, for example, could change colour to match its background.
Three historical trends have combined to establish the potential feasibility of smart structures.
The first is a transition to laminated materials. In the past, structures were manufactured from
large pieces of monolithic materials which were machined, forged, or formed to a final structural
shape, making it difficult to imagine the incorporation of active elements. However, in the past
30 years, a transition to laminated materials technology has occurred. Laminated materials,
which are built up from smaller constitutive elements, allow for the easy incorporation of active
elements within the structural form. One can now envision the incorporation of a smart ply
carrying actuators, sensors, processors, and inter-connections within the laminated materials.
The second trend has been the exploitation of the off-diagonal terms in the material
constitutive relations, which currently enables smart structures. The full constitutive relations of
materials include characterization of its mechanical, optical, electromagnetic, chemical,
physical, and thermal properties. For the most part, researchers have focused only on block
diagonal terms. Those interested in exploiting a material for its structural benefits have focused
only on the mechanical characterization. However, much can be gained by exploiting the off-
diagonal terms in the constitutive relations, which, for example, couple the mechanical and
electrical properties. The characterization and exploitation of these off-diagonal material
constitutive relations has led to the progress in the creation of smart structures.
The third and perhaps most obvious advance has come from the electrical engineering and
computer science disciplines. These include the development of microelectronics, bus
architectures, switching circuitry, and fibre-optic technology. Also, central to the emergence of
smart structures is the development of information processing, artificial intelligence, and control
disciplines. The sum of these three evolving technologies (the transition to laminated materials,
the exploitation of the off-diagonal terms in material constitutive relations and the advance in
microelectronics) has created the enabling infrastructure in which smart structures can develop.
3. Smart Materials
The technological field of “smart materials” is not transparent or clearly structured. It has
evolved over the past decades with increasing pace during the 1990s to become what it is today,
at the transition to the next millennium. Generally speaking these materials respond with a
change in shape upon application of externally applied driving forces. Typically this shape
change is reflected in an elongation of the sample, thus allowing its use as, for example, a small
linear motor.
The term “smart materials”, sometimes also called intelligent materials or active materials,
describes a group of material systems with unique properties. At this stage, the following
materials are considered the active ones:
Piezoelectric materials
Shape memory alloys
Magnetostrictive materials
Electrorheological materials
Magnetorheological fluids
In addition, some material systems that do not exhibit a shape change, but rather have
other significant properties, are sometimes also called smart materials (while others, for no
obvious reason, are not). Examples of such “other” smart materials include electro- and
magnetorheological fluids. These fluids can change viscosity over many orders of magnitude upon
application of an external magnetic or electric field.
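One common way to capture this field-dependent viscosity change, not stated in the article, is to treat the fluid as a Bingham plastic whose yield stress grows with the applied field. The coefficients below are assumed illustrative values, not measured properties of any particular fluid:

```python
# Hedged sketch: a magnetorheological fluid modeled as a Bingham plastic whose
# yield stress grows with the applied field. Coefficients are assumed values.
def apparent_viscosity(shear_rate, field, mu=0.1, c=1000.0):
    """Apparent viscosity (Pa.s) = (yield stress + viscous stress) / shear rate.
    Yield stress is taken as c * field, a crude linear assumption."""
    tau_yield = c * field                      # field-dependent yield stress, Pa
    return (tau_yield + mu * shear_rate) / shear_rate

gamma = 10.0  # shear rate, 1/s (assumed)
print("no field:  ", apparent_viscosity(gamma, 0.0))   # just the base viscosity mu
print("with field:", apparent_viscosity(gamma, 1.0))   # orders of magnitude larger
```

At this shear rate the apparent viscosity jumps by roughly a factor of a thousand when the field is switched on, which is the behaviour exploited in MR clutches, valves and dampers mentioned later in the article.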
Consequently, the term “smart materials” is not very well defined and frequently used to
describe different systems and systems behaviors. Although there have been approaches to
quantify and classify different levels of smartness or intelligence in systems, from a practical
standpoint it is most important to understand that none of the classifications is established and
used as a standard in the academic, scientific, or industrial community. Furthermore one should
note that the terms:
Smart materials
Intelligent Materials
Active Materials
Adaptive Materials

Fig. [b] Typical actuator working principle.

These terms, and to some extent “actuators” and “sensors”, are almost always used
interchangeably. This can sometimes lead to confusion as different terms can really describe the
same effect or property of a material. To add to the confusion the terms “smart devices”, “smart
systems” or “smart structure” are often carelessly used. Here one should note that in general the
system complexity increases from the unit “material” to “device” to “systems” to “structures”.
Any permutation of the adjective (smart, active) with the subject (material, device,…) is more or
less meaningful and seems to have been used already in one way or the other in published
reports and papers. Much more important than the actual word definition is the general
understanding of the field.
4. Smart Structures
The term “smart structure” is more commonly applied to a super-system where
intrinsically adaptive materials are employed. Smart structures are the structures made of smart
materials, in other words, are those which incorporate actuators and sensors that are integrated
into the structure and have structural functionality, as well as integrated control logic, signal
conditioning and power amplification electronics. Such actuating, sensing and signal processing
elements are incorporated into a structure for the purpose of influencing its states or
characteristics, be they mechanical, thermal, optical, chemical, electrical or magnetic. For
example, a mechanically intelligent structure is capable of altering either its mechanical states, i.e. its
position or velocity, or its mechanical characteristics, i.e. its stiffness or damping. An optically
intelligent structure could, for example, change colour to match its background. The truly
intelligent structural system learns and adapts its behavior in response to the external stimulation
provided by the environment in which it operates. However, there is a wide variety of less
sophisticated smart materials and structures which exploit the basic sub-disciplines; this
defines three classes of smart materials. These include materials with only sensing capabilities,
those with only actuation capabilities and those with both sensing and actuation capabilities, at
a primitive level relative to notions of intelligence.
5. Critical Component Technologies of Smart Structures
In the context of intelligent materials there is considerable focus on sensors and actuators
and control capabilities. The current generation of smart materials and structures incorporate one
or more of the following features:
Sensors: which are either embedded within a structural material or else bonded to the surface of
that material. Alternatively, the sensing function can be performed by a functional material,
which, for example, measures the intensity of the stimulus associated with a stress, strain, or
electrical, thermal, radioactive, or chemical phenomenon. This functional material may, in some
circumstance, also serve as a structural material.
Actuators: which are embedded within a structural material or else bonded to the surface of the
material. These actuators are typically excited by an external stimulus, such as electricity, in
order either to change their geometrical configuration or to change their stiffness and
energy-dissipation properties in a controlled manner. Alternatively, the actuator function can be
performed directly by a hybrid material, which serves as both a structural material, and also as a
functional material.
Control capabilities: which permit the behavior of the material to respond to an external
stimulus according to a prescribed functional relationship or control algorithm. These
capabilities typically involve one or more microprocessors and data-transmission links, and are
based upon automatic control theory.
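The sensor-controller-actuator loop described in these three items can be sketched in a few lines. The example below applies a simple proportional-derivative control law to damp a single-degree-of-freedom structure; every parameter value is an assumed illustrative choice, not taken from the article:

```python
# Minimal sketch of the sensor -> controller -> actuator loop: a PD law
# damping a single-degree-of-freedom structure. All parameters assumed.
m, k, c = 1.0, 100.0, 0.2          # mass, stiffness, natural damping (assumed)
kp, kd = 0.0, 5.0                  # PD gains: pure derivative gain adds damping
dt, steps = 1e-3, 20000            # time step and number of steps (assumed)

x, v = 0.01, 0.0                   # initial displacement from equilibrium, m
for _ in range(steps):
    sensed_x, sensed_v = x, v                    # "sensor": read the state
    f_act = -(kp * sensed_x + kd * sensed_v)     # "controller": PD control law
    a = (f_act - k * x - c * v) / m              # "actuator" force in the dynamics
    v += a * dt                                  # semi-implicit Euler integration
    x += v * dt

print(f"displacement after {steps * dt:.0f} s: {x:.2e} m")
```

With the derivative gain the effective damping rises from c = 0.2 to c + kd = 5.2, so the initial disturbance decays to a negligible level; setting kd = 0 and re-running shows the lightly damped oscillation persisting much longer.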
To get a better understanding of the active materials field it is appropriate to introduce an
approach to classify different smart materials. Ideally the classification should be collectively
exhaustive and mutually exclusive. The most common way of structuring is by looking at the
input and the output of a material system as illustrated in Figure [b]. The input or stimulus can
be for example a change in temperature or in magnetic field. The material then intrinsically
responds with an output, which in turn can be for example a change in length of the material,
change in viscosity or change in electrical conductivity. Active materials can be divided into two
groups. One group comprises the classical active materials as viewed by the academic
community and is characterized by the type of response these materials generate. Upon
application of a stimulus the materials respond with a change in shape and/or in length of the
material (Figure [b]).
Thus input is always transformed into strain, which can be used to introduce motion or
dynamics into a system. These materials are the most widely used group for design of smart
structures, where active materials are integrated into a mechanical host structure (for example a
building or a helicopter rotor blade) with the goal to change the geometrical dimensions of the
structures. The desired change in geometrical dimensions is mostly time-dependent, and often the
steady state of the structure is a dynamic system in which integrated active materials or devices are
constantly agitated to change the characteristics of the host in real time. Devices based on
materials that respond with a change in length are often referred to as actuators or solid state
actuators to be more specific. Conversely active materials can be also used as sensors where a
strain applied on the material is transformed into a signal that allows computation of the strain
levels in the system.
The second group consists of materials that respond to stimuli with a change in a key
material property, for example electrical conductivity or viscosity. While they are equally
important from a scientific point of view, they are less frequently integrated into mechanical
structures but rather used to design complex modules, for example clutches, fasteners, valves or
various switches. Frequently these materials are used as sensors. Although materials in this
group do not produce strain upon application of an external stimulus they are sometimes also
referred to as actuator systems. Examples include the electro- and magneto rheological fluids,
which respond with an increase in viscosity upon application of an external electrical or
magnetic field.
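The input/output classification described above can be sketched as a small program. The materials listed and their stimuli are common illustrative examples, not taken from this article:

```python
from dataclasses import dataclass

@dataclass
class ActiveMaterial:
    """An active material viewed through its input (stimulus) and output (response)."""
    name: str
    stimulus: str   # e.g. "electric field", "magnetic field", "temperature"
    response: str   # e.g. "strain", "viscosity", "conductivity"

    def group(self) -> int:
        # Group 1: responds with strain -> actuators / solid-state actuators.
        # Group 2: responds with a change in a key material property.
        return 1 if self.response == "strain" else 2

materials = [
    ActiveMaterial("piezoceramic", "electric field", "strain"),
    ActiveMaterial("shape-memory alloy", "temperature", "strain"),
    ActiveMaterial("magnetorheological fluid", "magnetic field", "viscosity"),
]

for m in materials:
    print(f"{m.name}: group {m.group()}")
```

This simply encodes the rule stated in the text: whatever the stimulus, a strain response places a material in the first (actuator) group, while any other property change places it in the second.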
References
1 Crawley E F and Anderson E H, Detailed models of piezoelectric actuation of beams, J.
Intell. Mater. Syst. Struct., 1 4-25, 1990.
2 Wada B K, Fanson J L, and Crawley E F, Adaptive Structures, Journal of Intelligent
Material Systems and Structures, Vol. 1, No. 2, pp. 157-174, 1990.
3 Gandhi, M.V. and Thompson, B S., Smart Materials and Structures, Chapman and Hall,
1992.
4 Crawley E F, Intelligent Structures for Aerospace: A Technology Overview and
Assessment, AIAA Journal, Vol.32 No.8, pp1689-1699, August, 1994.
5 Vijay K. Varadan, et al., Smart Material Systems and MEMS, John Wiley and Sons Ltd,
2006.
6 Bohua Sun, Cape Peninsula University of Technology, ResearchGate, September 2015.
3. Stress analysis of three-wheeler front fender by experimental and
finite element method
Patil Abhijit V. Asst. Professor
India is characterized by its rising population, urbanization, motorization and low per-
capita income. The largest cities have grown especially fast. This rapid growth of India’s cities
has generated a corresponding growth in travel demand and increased levels of motor vehicle
ownership and use. As Indian cities have grown in population, they have also spread outward. A
lack of effective planning and land use controls has resulted in rapid extension beyond old city
boundaries. This has greatly increased the number and length of trips for most Indians, and as a
result motorized transport has increased to a great extent.
1. Purpose of the Fender in automobile
The fender is the term for the part of an automobile, motorcycle or other vehicle body
that frames a wheel well, i.e. the fender underside. The primary purpose of the fender is to prevent
sand, mud, rocks, liquids, and other road spray from being thrown into the air by the rotating
tire. Sticky materials such as mud may adhere to the smooth outer tyre surface, while smooth
loose objects such as stones can become temporarily embedded in the tread grooves as the tyre
rolls over the ground. These materials can be ejected from the surface of the tyre at high velocity
as the tyre imparts kinetic energy to the attached objects. For a vehicle moving forward, the top
of the tyre is rotating upward and forward, and can throw objects into the air at other vehicles or
pedestrians in front of the vehicle.
Certain types of cars with narrow bodies use fenders for their resemblance to those used
on bicycles. They are attached to the wheel suspension and remain at a fixed distance from the
tyre regardless of wheel motion, and can therefore be much closer to the tyre than fixed wheel
wells. This was popular on early Classic Trials cars because the fenders were lightweight and
allowed for a thin streamlined body. They persist on cars wanting a "vintage" look.
Various materials are used for fenders based on their strength and life requirements, and
different manufacturing methods are used according to the material chosen. The most
commonly preferred materials are sheet metal, plastic, fiber-reinforced plastic and
thermoplastic polymer (Polypropylene Copolymer, PPC). Polypropylene Copolymer is
preferred because of its light weight, but its strength is lower than that of a sheet metal
fender. The following factors are considered while designing the fender.
The fender should provide sufficient cover to the wheel and suspension linkages. It
should have sufficient strength to withstand loads and vibration under all operating conditions.
2. Comparison of results for the front loading condition

Load (kg) | Stress in X direction     | Stress in Y direction      | Stress in Z direction
          | σx (N/mm²)                | σy (N/mm²)                 | σz (N/mm²)
          | FEA     | Experimental    | FEA      | Experimental    | FEA     | Experimental
20        | 0.294   | 0.2829          | -7.2458  | -7.2500         | 0.8236  | 0.8085
40        | 0.644   | 0.6427          | -15.095  | -15.2427        | 1.5944  | 1.5983
60        | Failure of Fender
From the comparison of results it can be seen that the experimental results are in close
agreement with the FEA results for the front loading condition, with the experimental values
generally slightly smaller than the FEA values. For the front loading condition the maximum
stress recorded is 1.5983 N/mm² in the experiment, slightly greater than the 1.5944 N/mm²
obtained by FEA at a load of 40 kg. The minimum stress recorded is 0.2829 N/mm² in the X
direction, slightly smaller than the 0.294 N/mm² obtained by FEA. The negative sign of the
Y-direction stress values shows the compressive nature of the stresses induced. The fender fails
to sustain the load at 60 kg.
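The agreement between the two methods can be quantified directly from the tabulated 40 kg values. This is only an illustrative post-processing sketch of the data given above:

```python
# FEA and experimental stresses (N/mm^2) for the 40 kg front-loading case,
# taken from the comparison table above.
fea = {"x": 0.644, "y": -15.095, "z": 1.5944}
exp = {"x": 0.6427, "y": -15.2427, "z": 1.5983}

for axis in ("x", "y", "z"):
    # Signed percent deviation of experiment relative to FEA.
    deviation = 100.0 * (exp[axis] - fea[axis]) / abs(fea[axis])
    print(f"sigma_{axis}: FEA {fea[axis]:+9.4f}  exp {exp[axis]:+9.4f}  deviation {deviation:+.2f}%")
```

All three deviations come out well under 1%, consistent with the close agreement noted above.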
Conclusion
The present study carried out stress analysis of a three-wheeler front fender by experimental
and finite element methods as well as topology optimization techniques. The old design of the
fender functionally satisfies only the general requirements of a mudguard; however, maintenance
reports and sales reviews indicated that the fender is abused by servicemen during repair and
maintenance work such as tyre puncture repair. Therefore new loading conditions, namely side
loading and front loading, were incorporated in the design of the fender.
Several physical tests conducted using the strain gauge technique show that the fender
fails at a 60 kg load in the front loading condition, whereas it is safe at 60 kg in the side loading
condition. The research methodology followed was thus validated by the experimental analysis.
The design and analysis methodology adopted here for the front fender can be adapted to the
design and analysis of similar automotive components.
4. Use of Nanotechnology in Reduction of Friction and Wear
Patil Harshwardhan H.
Asst. Professor
Nanotechnology is driving revolutionary technological change, and a growing research
emphasis is directed at improving nanopolymer composites as lubricants. Polymeric materials
offer good abrasion and wear resistance together with mechanical strength, lightness, ease of
processing, versatility and low cost, along with acceptable thermal and environmental
resistance, which makes them suitable for tribological applications. The viscoelasticity of
polymeric materials, however, works against this goal and makes the analysis of their
tribological behaviour, and of the processes involved, quite complicated. Hence, by dispersing
fine inorganic particles in the polymer matrix, the mechanical properties can be effectively
enhanced. A composite material derives from a combination of properties: high stiffness,
toughness and wear resistance that are essential under tough operating conditions such as slide
bearings, and that cannot be achieved by either component alone. The reinforcing effect is
strongly influenced by the microstructure, represented by the filler size and shape, the
homogeneity of distribution and dispersion of the particles within the polymer, and the extent
of the filler/matrix interface, all of which play a crucial role. The development of
nanocomposites showing good tribological performance requires a deep investigation of their
micro- to nanostructure, aiming to find synergistic mechanisms and reinforcement effects
exerted by the nanofillers.
During recent years, nanocomposites have attracted much research interest due to
notable enhancements in various composite properties at very low volume fractions. In the area
of miniaturization, factors such as material surface characteristics are particularly important,
since very low adhesion and friction forces must be achieved. Introducing nanoparticles into
lubricants is a complex task, because size, shape, concentration and of course the material itself
are all important factors influencing the lubrication performance of a specific system. A
significant parameter is the concentration of nanoparticles, since both wear scar diameter and
friction coefficient have been found to depend on the concentration of additives. Heat is
produced as a result of friction, so lubricants are required to have high temperature degradation
points. The addition of inorganic nanoparticles significantly improves their lifetime and
performance, although nanomaterials tend to show a sharp decrease in melting point at sizes
around or below 50 nm. Across studies on different fractions of nanoparticles in various
circumstances, low concentrations of less than 2 wt% were found to be optimal for improving
friction and wear behaviour; additions much above about 1% brought no further benefit, and
high concentrations can easily cause more damage.
5. Polymeric Composites for Biomedical Implants
Gurav Malikarjun M.
Asst. Professor
An implant is a medical device manufactured to replace a missing biological structure,
support a damaged biological structure, or enhance an existing biological structure. Medical
implants are man-made devices, in contrast to a transplant, which is a transplanted biomedical
tissue. Materials used for implant manufacture play an important role in implant fixation.
Biocompatibility and biomechanical properties are important variables that need to be
determined when new materials are considered for medical use. The first materials used in
orthopedic applications were metals such as stainless steel and cobalt-chrome-based alloys.
These materials offer the benefits of strength, corrosion resistance, ease of machining, and low
cost. Additionally, stainless steel-based plates can be manually bent or contoured to fit
individual fracture sites. In the past few decades, the use of titanium alloys has gained
popularity, as this material has a potential for osseointegration and a modulus of elasticity that
closely matches bone. However, disadvantages of metal implants include limited fatigue life,
mismatch of modulus of elasticity, potential for generation of wear debris, cold-welding seen
with titanium locking screw constructs, corrosion, and radiodensity that can preclude accurate
radiographic visualization of fracture reduction, healing, and tumor or infection progression or
resolution. Due to their very high Young's modulus they cause stress shielding, which results in
bone loss around the teeth. Moreover, these metallic materials also have aesthetic limitations.
This has led researchers to investigate non-metallic materials as replacements for metals. Polymers
and polymer composites possess a wide spectrum of properties that allow them to be used in
diverse medical applications. In general, after implantation of a biomaterial, two possible tissue
responses can take place. If a fibrous tissue forms between the implant and the bone, the implant
fails. However, if a direct intimate bone-implant contact forms, the implant is said to be
integrated into the bone. For proper functioning of the implant the material should possess certain
surface properties so that the bone can grow around it.
Conventional polymeric biomaterials include poly(2-hydroxyethyl methacrylate) HEMA, poly
(acrylic acid) PAA, poly (ethylene oxide) PEO, poly(vinyl alcohol) PVA etc. Many of these
polymers have been used as homo-polymers and also as co-polymers. These polymers have
excellent biocompatibility as they do not interact specifically with biological systems. However,
polymers have their own inherent limitations. For example the mismatch in Young’s modulus
between implant material and bone, and related over or under loading of bone, has been a major
concern in prosthetic application in poor bone conditions. To overcome this problem, attempts to
investigate non-metallic fiber reinforced composite (FRC) for such applications have been
made. Such considerations lead to the hypothesis that FRC implants would obtain the properties
comparable to those of the bone, in particular stiffness, which allows uniform load distribution
to the surrounding bone tissue. One emerging FRC as biomaterials specifically for dental
implants is Carbon-Fiber-Reinforced Polyetheretherketone (CFR-PEEK). Polyetheretherketone
(PEEK) is an organic synthetic polymeric tooth coloured material which has the potential to
serve as an aesthetic dental implant material. It has excellent chemical resistance and
biomechanical properties. In its pure form, Young’s modulus of PEEK is around 3.6 GPa.
Meanwhile, the Young's modulus of CFR-PEEK is around 18 GPa, which is close to that of cortical
bone. Hence, it has been suggested that PEEK could exhibit lesser stress-shielding when
compared to titanium.
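The stress-shielding argument can be made concrete by comparing each implant material's modulus with that of cortical bone. The PEEK and CFR-PEEK values are those quoted above; the cortical-bone and titanium-alloy figures are typical literature values assumed here for illustration only:

```python
E_BONE = 18.0  # GPa, assumed typical Young's modulus of cortical bone

# Young's moduli in GPa; PEEK and CFR-PEEK values from the text,
# Ti-6Al-4V is an assumed typical value for a titanium implant alloy.
implant_moduli = {"PEEK": 3.6, "CFR-PEEK": 18.0, "Ti-6Al-4V": 110.0}

for name, E in implant_moduli.items():
    # The further E/E_bone rises above 1, the more load bypasses the
    # surrounding bone (stress shielding).
    print(f"{name:10s} E = {E:6.1f} GPa, E/E_bone = {E / E_BONE:5.2f}")
```

CFR-PEEK sits at a ratio of about 1, whereas the titanium alloy is several times stiffer than bone, which is the mismatch the text associates with stress shielding.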
Fig. 1: Lateral radiographs (A, B) and coronal (C) and sagittal (D) magnetic resonance images
showing implants.
This said, PEEK is a bioinert material and it does not possess any inherent
osseoconductive properties. Osseoconduction is the process by which bone grows on a surface. It
is a phenomenon regularly seen in any type of bone healing process. Osteoinduction implies the
recruitment of immature cells and the stimulation of these cells to develop into preosteoblasts.
PEEK can be coated and blended with bioactive particles to increase the osseoconductive
properties and surface roughness. However, high temperatures involved in plasma-spraying can
deteriorate PEEK. Furthermore, thick calcium phosphate coatings on PEEK can delaminate
because of their limited bond strength when compared to coated titanium implants.
Additionally, combining PEEK with particles in the micrometer size range leads to
mechanical properties inferior to those of pure PEEK or CFR-PEEK. Therefore, more
recently, a significant amount of research has been conducted to modify PEEK by coating or
blending it with nanosized particles and producing nanolevel surface topography. CFR-PEEK
implants can be engineered to have a varying degree of strength and stiffness, based on the
orientation and number of carbon fiber layers. This can permit manufacture of an implant that is
more compliant than metal, better matching the modulus of elasticity of bone. Finally, the
radiolucency and magnetic resonance imaging (MRI) compatibility of CFR-PEEK are two
additional characteristics that make this material beneficial. Fracture reduction and healing can
be readily assessed with standard radiographs. The absence of artifact on both computed
tomography (CT) and MRI means that CFR-PEEK has applications for the spine, for infection,
and for oncologic cases. Medical implants are shifting more and more from passive, bioinert
parts to bioactive, resorbable and cell growth managing components. Bioactive particles can be
incorporated into PEEK to produce bioactive implants. Hydroxyapatite is a bioceramic with
chemistry similar to bone and it is shown to induce bone formation around implants.
Hydroxyapatite particles (HAp) in the micrometer size range have been melt-blended with
PEEK to produce PEEK-HAp composites, but these are difficult to use as dental implants
because of the poor mechanical properties that result from insufficient interfacial bonding
between PEEK and the hydroxyapatite particles. Other applications of CFR-PEEK include
uses in spine surgery, orthopaedic trauma, and musculoskeletal oncology and to treat
musculoskeletal infections.
Fig.2 Commercially available carbon-fiber-reinforced polyetheretherketone intramedullary
nails.
CFR-PEEK implants offer several benefits over traditional metal implants. Their
mechanical properties, including high fatigue strength and low modulus of elasticity, are
advantageous for use in orthopedic surgery. Their radiolucent property allows improved
imaging, which is useful for assessing healing or in situations in which an artifact-free CT or
MRI scan is required. Although commonly used as material for spinal interbody fusion implants,
their use in other areas of orthopedics is now being explored. Future clinical studies will help
define their usefulness and role in other areas of orthopedic surgery. Several unresolved issues
remain. First, translational research is needed to confirm that the low Young’s modulus of CFR-
PEEK improves callus formation and provides a quicker time to union in fracture cases. Second,
clarification is needed regarding the clinical benefit of coating intramedullary nails, such as
those shown in Figure 2, with antibiotics, in relation to the ability to monitor bone infection
progression or regression via MRI. Third, clinical studies must be performed to verify the direct
benefits of the radiolucency of the material for the quality of fracture reduction.
6. Entropy generation minimization: The new thermodynamics of
finite size devices and finite time processes in heat transfer
Mane Ajit R.
Asst. Professor
Abstract: Entropy generation minimization (finite time thermodynamics or thermodynamic
optimization) is the method that combines into simple models the most basic concepts of heat
transfer, fluid mechanics, and thermodynamics. These simple models are used in the
optimization of real (irreversible) devices and processes, subject to finite-size and finite-time
constraints. The review traces the development and adoption of the method in several sectors of
mainstream thermal engineering and science: cryogenics, heat transfer, education, storage
systems, solar power plants, nuclear and fossil power plants, and refrigerators. Emphasis is
placed on the fundamental and technological importance of the optimization method and its
results, the pedagogical merits of the method, and the chronological development of the field.
Objective:
Entropy generation minimization (EGM) is the method of modeling and optimization of real
devices that owe their thermodynamic imperfection to heat transfer, mass transfer, and fluid flow
irreversibilities. It is also known as ‘‘thermodynamic optimization’’ in engineering, where it was
first developed, or more recently as ‘‘finite time thermodynamics’’ in the physics literature. The
method combines from the start the most basic principles of thermodynamics, heat transfer, and
fluid mechanics, and covers the interdisciplinary domain pictured in Fig. 1. The most exciting
and promising interdisciplinary aspect of the method is that it also combines research interests
from engineering and physics.
The objectives of the optimization work may differ from one application to the next, for
example, minimization of entropy generation in heat exchangers, maximization of power output
in power plants, maximization of an ecological benefit, and minimization of cost. Common in
these applications is the use of models that feature rate processes (heat transfer, mass transfer,
and fluid flow), the finite sizes of actual devices, and the finite times or finite speeds of real
processes. The optimization is then carried out subject to physical constraints that are in fact
responsible for the irreversible operation of the device. The combined heat transfer and
thermodynamics model ‘‘visualizes’’ for the analyst the irreversible nature of the device. From
an educational standpoint, the optimization of such a model gives us a feel for the otherwise
abstract concept of entropy generation, specifically where and how much of it is being
generated, how it flows, and how it impacts thermodynamic performance.
The method
The critical new aspect of the EGM method—the aspect that makes the use of thermodynamics
insufficient, and distinguishes it from exergy analysis—is the minimization of the calculated
entropy generation rate. To minimize the irreversibility of a proposed design the analyst must
use the relations between temperature differences and heat transfer rates, and between pressure
differences and mass flow rates. He or she must relate the degree of thermodynamic non ideality
of the design to the physical characteristics of the system, namely to finite dimensions, shapes,
materials, finite speeds, and finite-time intervals of operation. For this the analyst must rely on
heat transfer and fluid mechanics principles, in addition to thermodynamics. Only by varying
one or more of the physical characteristics of the system, can the analyst bring the design closer
to the operation characterized by minimum entropy generation subject to finite-size and finite-
time constraints.
Fig 1: The interdisciplinary field covered by the method of entropy generation and minimization
As a final comment on Fig. 1, note that thermodynamics is a ‘‘foundation’’ that should be
visible (i.e., present, taught and used) in related disciplines such as heat transfer and fluid
mechanics. Historically, however, thermodynamics was formulated after heat transfer, and
long after mechanics
(e.g., Ref. 1). The interdisciplinary domain that is now being mapped by the research on EGM or
finite-time thermodynamics is finally bridging the gap between thermodynamics and the other
thermofluid engineering disciplines.
Heat transfer:
The field of heat transfer engineering adopted the techniques developed in cryogenics and
applied them to many classes of devices for promoting heat transfer. The optimization was
carried out at two levels of complexity: complete components (e.g., heat exchangers), and
elemental features (e.g., fins, ducts). The field is vast; therefore in this section we review only
some of the most basic examples.
The objective of this study is to devise concrete methods for minimizing entropy generation in
engineering equipment for heat transfer.
Coincidentally, this goal agrees with the objective of contemporary thermal design practice.
In very broad terms, contemporary heat transfer design objectives fall into two categories:
1) Heat transfer augmentation problem
2) Thermal insulation problem
The net heat transfer rate between two surfaces at temperatures T1 and T2 is

    Q = hA(T1 - T2)

A heat transfer augmentation problem is one in which the thermal conductance hA is to be
increased. Increasing the thermal conductance improves the thermal contact, so a given heat
transfer rate can be driven by a smaller temperature difference; at the same time the entropy
generation rate also decreases:

    Sgen = Q(T1 - T2)/(T1 T2)

In a thermal insulation problem, the objective is to minimize the effective thermal conductance
hA. In such a problem the two temperatures T1 and T2 are fixed; hence as hA decreases, so does
the heat transfer rate between the two surfaces.
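A short numerical sketch of the two standard relations Q = hA(T1 - T2) and Sgen = Q(T1 - T2)/(T1·T2) shows how raising the conductance hA of a component with a fixed duty Q shrinks the temperature gap and, with it, the entropy generation rate. The numbers are illustrative, not from the article:

```python
Q = 1000.0   # W, fixed heat transfer duty
T2 = 300.0   # K, cold-side temperature

for hA in (50.0, 100.0, 200.0):   # W/K, thermal conductance
    dT = Q / hA                    # temperature difference T1 - T2 from Q = hA*(T1 - T2)
    T1 = T2 + dT
    S_gen = Q * dT / (T1 * T2)     # entropy generation rate, W/K
    print(f"hA = {hA:6.1f} W/K  dT = {dT:5.1f} K  Sgen = {S_gen:.4f} W/K")
```

Doubling hA roughly halves both the temperature difference and the entropy generation rate, which is the augmentation case summarized in the table below as a fixed heat transfer rate with a reduced temperature difference.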
Problem                        | Temperature difference | Heat transfer rate | Entropy generation rate
1) Thermal contact enhancement | To be reduced          | Fixed              | Reduced
2) Thermal insulation          | Fixed                  | To be reduced      | Reduced
In both cases, entropy generation is minimized.
Here we consider the topic of entropy generation minimization in convective heat transfer.
The irreversibility of convective heat transfer is seen to be due to two effects: heat transfer
across a finite temperature difference, and fluid friction.
Conclusion:
A revolution is taking place in thermodynamics, and it amounts to the bridging of the gaps
between thermodynamics, heat transfer and fluid mechanics.
The present study shows that entropy generation in any system corresponds to the destruction
of the exergy available in the system. The entropy generation rate Sgen depends solely on the
degree of thermodynamic irreversibility of the system.
If engineering systems and their components are to operate such that the destruction of available
work is minimized, then the design of such systems and components must begin with the
minimization of entropy generation.
References
1 Adrian Bejan, Entropy Generation Minimization (The method of thermodynamic
optimization of finite size systems and finite time processes), CRC Press, pp. 21-149.
2 Ibrahim Dincer and Yunus Cengel, Energy, Entropy and Exergy Concepts and their Roles
in Thermal Engineering, published 21 August 2001.
3 Varun Singh , Vikrant Aute and Reinhard Radermacher , Usefulness of Entropy
Generation Minimization through a heat exchanger modelling Tool,
http://docs.lib.purdue.edu/iracc
4 Adrian Bejan, Entropy Generation Minimization (The method of thermodynamic
optimization of finite size systems and finite time processes), AIP Publishing,
http://jap.aip.org/
7. MicroGroove® Technology
Ghanashyam M. Chendke
Asst. Professor
Copper tubes are used in heat exchangers because copper has many desirable properties
for efficient heat transfer and durability. Firstly, copper is an excellent conductor of heat: its
high thermal conductivity allows heat to pass through it rapidly. Other desirable properties
include its corrosion resistance, biofouling resistance, maximum allowable stress and internal
pressure, creep rupture strength, fatigue strength, hardness, thermal expansion, specific heat,
antimicrobial properties, tensile strength, yield strength, high melting point, alloyability, ease
of fabrication and ease of joining. [1]
Due to these properties, copper is widely used in heat exchangers for industrial use and in
heating, ventilating and air conditioning (HVAC) systems. A research team supported by the
ICA (International Copper Association) developed the MicroGroove® technology platform,
which utilizes small-diameter (≤ 5 mm) inner-grooved copper tubes. The motive behind the
research was to reduce the consumption of copper used in HVAC systems. [2][3][4]
The MicroGroove® technology enhances heat transfer by grooving the inside surface of
the tube. This brings additional benefits: it increases the surface-to-volume ratio, promotes
mixing of the refrigerant, and homogenizes refrigerant temperatures across the tube. [4][5][6][7]
Weight Reduction:
In a study of equivalent 5-kW HVAC heat exchangers, the tube material in the coils
weighed 3.09 kg for 9.52-mm diameter tubes, 2.12 kg for 7-mm diameter tubes, and 1.67 kg
for 5-mm diameter tubes. Tube weight was thus reduced by nearly 46% when copper tube
diameters were downsized from 9.52 mm to 5 mm. [4][5]
Fig.1.Microgroove Tubes [8]
Fig.2. Microgroove Tubes –a closer view [9]
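The quoted weight reduction can be checked directly from the coil masses reported in the study:

```python
# Tube outer diameter (mm) -> coil tube mass (kg), from the 5-kW study cited above.
coil_mass = {9.52: 3.09, 7.0: 2.12, 5.0: 1.67}

# Percent mass saved when downsizing from 9.52 mm to 5 mm tubes.
reduction = 100.0 * (coil_mass[9.52] - coil_mass[5.0]) / coil_mass[9.52]
print(f"Downsizing from 9.52 mm to 5 mm tubes cuts tube mass by {reduction:.0f}%")
```

This reproduces the "nearly 46%" figure given in the text.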
INFLUENCE May 2016
Department of Mechanical Engineering Page | 26
Design consideration:
When small-diameter copper tubes are used, higher pressures are required to condense
the refrigerant. The allowable working pressure is directly proportional to wall thickness and
inversely proportional to diameter; in other words, for tubes with the same wall thickness,
smaller-diameter tubes can withstand higher pressures than larger-diameter tubes. The
refrigerant pressure drop, however, increases for smaller-diameter tubes, and more work is
required to circulate the refrigerant through a given length of tube when the pressure drop is
high. This pressure drop can be offset by designing coils with shorter tube lengths. [4][10]
Refrigerant:
Propane (R290) is an eco-friendly refrigerant with outstanding thermodynamic properties
and a lower pressure requirement than CO2, but it is extremely flammable. [4][11] Research
has demonstrated that MicroGroove is suitable for R290-charged room air conditioners,
because the refrigerant charge requirement is dramatically reduced with smaller-diameter
copper tubes. The risk of tube explosions is dramatically reduced as well. [4][12][13]
Benefits of MicroGroove® Technology:
The technology is eco-friendly, as propane (R290), an eco-friendly refrigerant with low
GWP (global warming potential) and ODP (ozone depletion potential) ratings, can be used
safely. [14]
Energy efficiency and reduced overall system size can be achieved at lower material cost
with smaller-diameter tubes. Smaller tubes reduce the usage of tube materials, fin materials
and refrigerants, contributing to an overall reduction in system cost. Also, as mentioned,
smaller-diameter tubes can operate at higher pressures. Copper tube offers other advantages,
such as corrosion resistance, durability, superior properties and familiar manufacturing
methods. [10]
References:
1. https://en.wikipedia.org/wiki/Copper_in_heat_exchangers
2. http://copperalliance.org/core-initiatives/technology/technology-projects/
3. http://www.microgroove.net/
4. https://en.wikipedia.org/wiki/Copper_MicroGroove
5. FAQs: Thirty Questions with Answers about Economical, Eco-friendly Copper Tubes for
Air Conditioner Applications; http://www.microgroove.net/sites/default/files/overview-
ica-questions-and-answers-qa30.pdf
6. Microgroove Brochure: http://www.microgroove.net/sites/default/files/microgroove-
brochure-game-changer.pdf
7. Microgroove™ Update Newsletter: Volume 1, Issue 2, August 2011:
http://www.microgroove.net/sites/default/files/4315_microgroove_newsletter_august_2.
8. http://www.appliancedesign.com/articles/94226-smaller-diameter-copper-tubes-support-
manufacturing-and-design
9. https://www.researchgate.net/figure/223415444_fig1_Fig-1-Micro-groove-fin-inside-
tubes-of-copper
10. http://www.copper.org/applications/plumbing/comml_tube/hvac_info.html
11. Microgroove™ Update Newsletter: Volume 1, Issue 3, December 2011:
http://www.microgroove.net/sites/default/files/4473_ica_microgroove_nl_final.pdf
12. "Principle of Designing Fin-And-Tube Heat Exchanger with Smaller Diameter Tubes for
Air Conditioner" by Wei Wu, Guoliang Ding, Yongxin Zheng, Yifeng Gao and Ji Song,
The Fourteenth International Refrigeration and Air Conditioning Conference, Purdue
University, July 2012;
http://www.conftool.com/2012Purdue/index.php?page=browseSessions&abstracts=show
&mode=list&search=2223
13. "Developing Low Charge R290 Room Air Conditioner by Using Smaller Diameter
Copper Tubes" by Guoliang Ding, Wei Wu, Tao Ren, Yongxin Zheng, Yifeng Gao, Ji
Song, Zhongmin Liu and Shaokai Chen; The Tenth IIR Gustav Lorentzen Conference on
Natural Refrigerants, June 2012 (GLC)
14. http://copperalliance.org/wordpress/wp-content/uploads/2014/04/ICA-AR2013-lowres-
web-r2.pdf
8. Maintainability
Ganesh N. Rakate
Asst. Professor
1. Introduction
Maintainability is a design parameter intended to reduce repair time, as opposed to
maintenance, which is the act of repairing or servicing an item or equipment. The history of
maintainability can be traced back to 1901, when the U.S. Army Signal Corps contract for the
development of the Wright brothers' airplane contained a clause requiring that the aircraft be
"simple to operate and maintain."
2. Maintainability Terms and Definitions, Importance, and Objectives
Some of the terms and definitions associated with maintainability are as follows:
Maintainability: The probability that a failed item/equipment will be restored to acceptable
working condition.
Maintainability engineering: An application of scientific knowledge and skills to develop
equipment/item that is inherently able to be maintained as measured by favorable maintenance
characteristics as well as figures-of-merit.
Maintainability model: A quantified representation of a test/process to perform an analysis of
results that determine useful relationships between a group of maintainability parameters.
Downtime: The total time in which the item/equipment is not in a satisfactory operable
condition.
Serviceability: The degree of ease/difficulty with which an item/equipment can be restored to
its satisfactory operable state.
Maintainability function: A plot of the probability of repair within a given time (on the y-axis)
against maintenance time (on the x-axis); it is useful for predicting the probability that a repair
will be completed within a specified time.
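For the common case of exponentially distributed repair times, this maintainability function takes the form M(t) = 1 - e^(-t/MTTR), where MTTR is the mean time to repair. A minimal sketch under that exponential assumption (the figures are illustrative):

```python
import math

def maintainability(t_hours, mttr_hours):
    """M(t): probability that a failed item is restored within t hours,
    assuming exponentially distributed repair times with mean MTTR."""
    return 1.0 - math.exp(-t_hours / mttr_hours)

# With a 2-hour MTTR, probability of completing a repair within 4 hours:
print(round(maintainability(4.0, 2.0), 3))  # → 0.865
```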
There are many factors responsible for the importance of maintainability. In particular,
alarmingly high operating and support costs, due to failures and subsequent maintenance, are
among the most pressing problems. These problems were even more apparent in the early days
of the maintainability field. The main objective of maintainability is to maximize equipment and
facility availability. The other maintainability objectives include: reduce predicted maintenance
time and costs by simplifying maintenance through design, determine labor-hours and other
resources needed to perform the projected maintenance, and use maintainability data to
determine item availability/unavailability.
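The availability objective mentioned above is commonly quantified as inherent availability, A = MTBF / (MTBF + MTTR). A minimal sketch with illustrative figures:

```python
def inherent_availability(mtbf_hours, mttr_hours):
    """Steady-state (inherent) availability, computed from the mean time
    between failures (MTBF) and the mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# An item failing every 500 h on average and taking 5 h to repair:
print(round(inherent_availability(500.0, 5.0), 4))  # → 0.9901
```

Reducing repair time through design (the maintainability objective) raises availability directly, since MTTR appears only in the denominator.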
3. Maintainability Management in System Life Cycle
An efficient and effective design can only be achieved by seriously considering
maintainability issues that arise during the system life cycle. This means a maintainability
program must incorporate a dialogue between the manufacturer and user throughout the system
life cycle. This dialogue concerns the user’s maintenance needs and other requirements for the
system and the manufacturer’s response to these needs and requirements.
The life cycle of a system can be divided into the following four phases:
Phase I: Concept development
In Phase I, high risk areas are identified and system operation needs are translated into a set
of operational requirements. The primary maintainability concern during this phase is the
determination of system effectiveness needs and criteria, in addition to establishment of the
maintenance and logistic support policies and boundaries required to satisfy mission objectives
by using operational and mission profiles.
Phase II: Validation
During Phase II, operational requirements developed and formulated in the previous phase
are refined further with respect to system design requirements. The prime objective of validation
is to ensure that full-scale development does not begin until factors such as costs, performance
and support objectives, and schedules have been effectively prepared and evaluated.
Phase III: Production
In Phase III, the system is manufactured, tested, and delivered, and, in some cases, installed per
the technical data package resulting from Phases I and II. The maintainability engineering
design effort will largely be complete by this time.
Phase IV: Operation
In Phase IV, the system is used, logistically supported, and modified as appropriate. During the
operation phase maintenance, overhaul, training, supply, and material readiness requirements
and characteristics of the system become clear. Although there are no particular maintainability
requirements at this time, the phase is probably the most crucial because the actual cost-
effectiveness and logistic support of the system are demonstrated. In addition, maintainability-
related data can be obtained from the real life experience for future use.
4. Maintainability Design Characteristics and Specific Considerations
There are many maintainability-related system/item characteristics that must be emphasized
during design. Some of these are: modular design, interchangeability, displays, human factors,
safety, test points, standardization, controls, illumination, weight, lubrication, accessibility,
installation, training needs, adjustments and calibration, tools, labeling and coding, test
equipment, manuals, work environment, covers and doors, size and shape, failure indication
(location), connectors, and test hookups and adapters. The most commonly cited/mentioned
maintainability-related characteristics by professionals involved with maintainability include:
displays, controls, doors, covers, labeling and coding, accessibility, test points, checklists,
mounting and fasteners, handles, connectors, test equipment, charts, aids, and manuals. Some of
these factors are discussed below.
Accessibility
This may be described as the relative ease with which an item can be reached for replacement,
service, or repair. Inaccessibility is a frequent cause of ineffective maintenance and is thus an
important maintainability problem. Many factors can affect accessibility.
Modularization
Modularization may be described as the division of a product into functionally and
physically distinct units to permit removal and replacement. The degree of modularization in a
system or product depends on factors such as cost, practicality, and function. Every effort should
be made to use modular construction wherever it is logistically feasible and practical as it helps
reduce training costs, in addition to other concrete benefits.
Some advantages of modularization are: relative ease in maintaining a divisible
configuration, less time-consuming and -costly maintenance staff training, simplified new
equipment design and shortened design time, easy to divide up maintenance responsibilities,
lower skill levels and fewer tools required, existing product or equipment can be modified with
the latest functional units replacing their older equivalents, fully automated approaches can be
employed to manufacture standard “building blocks,” and easy recognition, isolation, and
replacement of faulty items leading to more efficient maintenance, thus lower equipment
downtime.
Disposable Modules
Disposable modules are designed to be discarded rather than repaired after a failure.
They are used in situations where repair is costly or impractical. Their advantages outweigh the
disadvantages in cases where maintainable modules would require significant expenditure in
materials, labor time, and tools.
The important benefits of a disposal-at-failure design include simpler and more concise
trouble-shooting approaches; smaller, simpler, and more durable modules with a more reliable
design; fewer types of spare parts required; reduction in required tools, personnel, facilities, and
repair time; improved reliability due to the sealing and potting methods; and better
standardization and interchangeability of modules.
Interchangeability
Interchangeability may be defined as an intentional aspect of design; any
part/component/unit can be replaced within a given item by any similar part/component/unit.
There are two distinct types of interchangeability: physical and functional. In physical
interchangeability, two items can be connected, used, and mounted in the same location and in
the same manner. With functional interchangeability, two given items serve the same function.
The basic principles of interchangeability include: providing liberal tolerances; ensuring that, in
items requiring frequent replacement or servicing of parts because of wear or damage, each part
is completely interchangeable with every other similar part; and recognizing that, for items
expected to function without part replacement, strict interchangeability could be uneconomical.
Standardization
Standardization may be described as the attainment of maximum practical uniformity in an
item’s design. Although standardization should be a central goal of design because use of
nonstandard parts can result in lower reliability and increased maintenance, it must not be
permitted to interfere with advances in technology or improvements in design.
9. Burnishing process
H. G. Patil
Asst. Professor
Burnishing is the polishing and work hardening of a metallic surface. This process
smooths and hardens the surface, creating a finish which will last longer than one that hasn't been
burnished. This is a desirable characteristic in clock making and watch making since the major
determining factor in how long a clock or watch will run is the degree to which the bearings will
wear over time. Long-lasting bearing surfaces, like pivots and pivot holes, will greatly increase
the length of time that a clock or watch will perform as expected. Although this article is
primarily targeting the burnishing process for clocks, the same methods may be applied to
watches.
Why Not Burnish?
In watch making German craftsmen use plated steels which do not react well to the
burnishing process. The pivots are very soft and it is only the plating that keeps them from
wearing out immediately. Once the plating is gone, the only choice is to replace the pivot or
replace the whole assembly. These pivots can be polished but the plating is easily damaged. It
can come off as a cylinder, after breaking loose at the shoulder, during the polishing process. To
polish these, it is best to use the square edge of a piece of soft wood (like the handle of a buff
stick) charged with Simichrome. The soft wood won't damage the plating, and Simichrome
is a mild enough abrasive that very little of the plating is removed.
Polishing vs. Burnishing
Even though polishing occurs as a result of burnishing, polishing in itself is not
burnishing. The distinction is that polishing will produce a smooth finish, but not a hard one.
Polishing is more about removing material to obtain the desired finish, whereas burnishing is
a stretching and hardening with minimal material loss. It is impossible to tell by looking at a
pivot whether it has been polished or burnished but burnishing will generally result in a deeper
polish than is possible with polishing alone.
When to Polish or Burnish
Polishing is an important first step in the burnishing process and a test of the craftsman’s
ability to bring a pivot to the point it can be burnished. A poor polish will make it very difficult
to bring the pivot to perfection with burnishing alone. Polishing can be done with any kind of
abrasive or cutting tool; sandpaper or stones are common polishing agents and are less
expensive than other polishing methods. Polishing powders and diamond compounds can be
used for polishing but this can introduce the risk of embedding the polishing material in the
piece being polished. The polishing is a necessary first step in the burnishing process.
Introduction
Metallic rolling processes produce work-hardened and smooth surfaces; similarly, the
knurling process in lathe operation produces a hard, cold-worked surface. Similar results
are also produced by the burnishing process.
Roller burnishing helps users to eliminate secondary operations for substantial time and cost
savings, while at the same time improving the quality of their product. Roller burnishing is a
method of producing an accurately sized, finely finished and densely compacted surface that
resists wear. Hardened and highly polished steel rollers are brought into pressure contact with a
softer work piece. As the pressure exceeds the yield point of the work piece material, the surface
is plastically deformed by cold flowing of subsurface material. A burnished surface is actually
smoother than an abrasively finished surface of the same profilometer reading. Profilometers
measure roughness height. Abrasive metal removal methods lower the roughness height, but
they leave sharp projections in the contact plane of the machined surface. Roller burnishing is a
metal displacement process. Microscopic “peaks” on the machined surface are caused to cold
flow into the “valleys”, creating a plateau like profile in which sharpness is reduced or
eliminated in the contact plane. The burnished surface will therefore resist wear better than the
abraded surface in metal to metal contact, as when a shaft is rotating in a bushing. There are two
types of burnishing processes.
1) ball burnishing and 2) roller burnishing.
The ball materials are hardened alloy steel, carbide, diamond, etc. The roller material is
hardened alloy steel. Residual compressive stresses are induced on the surface of the burnished
components, which results in fatigue resistance and improvement in wear resistance quality.
Applications of Roller Burnishing
Roller burnishing was first applied in American industry in the 1930s to improve the fatigue
life of railroad car axles and rotating machinery shafts. By the 1960s, roller burnishing was more
widely applied, particularly in the automotive industry, as other process advantages were
recognized. The primary benefits, related to part quality, are as follows:
1. Accurate size control (tolerances within 0.0005 inch or better, depending on material
types and other variables).
2. Surface finish (typically between 1 and 10 microinches Ra).
3. Increased surface hardness (by as much as 5 to 10% or more).
Roller burnishing has long been used on a wide variety of automotive and heavy equipment
components (construction, agricultural, mining and so on), including piston and connecting rod
bores, brake system components, transmission parts and torque converter hubs.
Burnishing tools are also now widely applied in non-automotive applications for a variety of
benefits; to produce better and longer lasting seal surfaces; to improve wear life; to reduce
friction and noise levels in running parts; and to enhance cosmetic appearance. Examples
include valves, pistons of hydraulic or pneumatic cylinders, lawn and garden equipment
components, shafts for pumps, shafts running in bushings, bearing bores, and plumbing fixtures.
Advantages of Roller Burnishing:
1. Roller burnishing is a faster, cleaner, more effective and more economical method of
sizing and finishing parts to exacting specifications.
2. Mirror-like surface finish.
3. Consistent dimensional tolerance and repeatability.
4. Single-pass operation offers very short cycle times.
5. Increases the surface hardness of components.
6. Reduces the reworks and rejections.
Roller Burnishing Method
Fig. Roller burnishing tool
Burnishing experiments are conducted on turned mild steel work piece, which is ductile and
available commercially in the form of round bars. The roller burnishing tool assembly is kept in
the tool holder of dynamometer and it is held rigidly by two bolts.
The burnishing force, i.e., the radial component of the cutting force (in the y-direction), is
measured by a dynamometer. A repetitive or random deviation from the nominal surface which
forms the pattern of the surface is known as surface texture. It includes roughness, waviness,
flaws, etc. Waviness is due to the geometric errors of the machine tool and the varying stiffness
of the machine tool. Various surface roughness parameters, i.e., Ra, Rz, and Rmax, are measured
using a surface roughness tester.
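Of the parameters named above, Ra is the arithmetic mean of the absolute deviations of the profile from its mean line, and Rmax is the peak-to-valley height over the sampling length. A minimal sketch of these computations, using synthetic profile heights (not data from the experiments described here):

```python
def roughness_ra(heights_um):
    """Ra: arithmetic average of absolute deviations from the mean line."""
    mean_line = sum(heights_um) / len(heights_um)
    return sum(abs(h - mean_line) for h in heights_um) / len(heights_um)

def roughness_rmax(heights_um):
    """Rmax: maximum peak-to-valley height over the sampling length."""
    return max(heights_um) - min(heights_um)

# Synthetic profile heights in micrometres:
profile = [0.8, -0.5, 1.2, -1.0, 0.3, -0.6]
print(round(roughness_ra(profile), 3), round(roughness_rmax(profile), 3))  # → 0.733 2.2
```

Burnishing lowers Ra mainly by flattening the peaks of the turned profile, which also reduces Rmax.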
Fig. Surface hardness and burnishing force
Fig. Surface finish and number of tool passes
Conclusions
1. The surface hardness of mild steel specimens increases with increase in the burnishing force
up to 42 kgf. Further increase of burnishing force results in the decrease of surface hardness on
mild steel specimens. The maximum surface hardness obtained is 70 HRB.
2. Maximum reduction in surface roughness is observed in first five passes on mild steel by
Roller Burnishing operation.
10. Superconductivity & Discovery of Helium
Jyoti S. Jadhav
Asst. Professor
The roots of the discovery of superconductivity originate in the discovery of helium and its
liquefaction. The existence of the new element (helium) in the Sun came to light while the French
astronomer Jules Janssen was observing the solar eclipse (as a bright yellow line in the spectrum) of
August 18, 1868 at Guntur in the state of Andhra Pradesh, India. Lockyer and Frankland
confirmed Janssen's results and named the new element "helium" (after "Helios", the Sun) on
October 20, 1868. In 1892 Luigi Palmieri detected the existence of helium on Earth, and in 1895
Sir William Ramsay isolated helium on Earth [1]. Helium was first liquefied by Kamerlingh
Onnes on July 10, 1908 in Leiden, Holland [3, 4]. The initial attempts to liquefy helium and to
confirm the results were extremely difficult, as the apparatus used was very complicated. They
used a cascade process with liquid air and liquid hydrogen for pre-cooling the compressed
helium, followed by a Joule-Thomson expansion process for the final liquid production [4, 5].
In 1905, large reserves of helium were found in the natural gas fields at Dexter, Kansas by Cady
and McFarland of the University of Kansas at Lawrence [6]. Additional large sources were
found in other natural gas fields of the United States. The special properties of helium for critical
applications, particularly for defense, were recognized during World War I by the US
Government, which subsequently enacted the Helium Conservation Act in 1925. This has since
been replaced by the Helium Privatization Act of 1996. Although the US holds 21% of the
world's estimated helium reserves, presently it produces 77% of the world's helium [7]. The
remaining 79% of the world's reserves remain mostly within untapped large sources in Russia
[8], as well as in Algeria and Qatar, who provide helium to the European and Asian markets,
respectively. With the US exports substantially rising and US consumption falling (though still
the highest in the world at about 45% [9]), it has been estimated that the US helium-rich sources
will be depleted in less than 30 years [10]. Global demand for helium is projected to grow from
the present 175 to 300 million cubic meters per year by 2030 [8].
DISCOVERY OF SUPERCONDUCTIVITY
After the production of liquid helium, Kamerlingh Onnes conducted experiments to determine
the effect of low temperature on the electrical resistance of pure metals. As the temperature of
the material approached absolute zero, he wanted to see which of three results would occur:
whether the electrical resistance would (1) reduce to zero, (2) fall to a minimum and stay
constant, or (3) fall to a minimum and then increase to a maximum (behaving like an insulator).
Experiments conducted under Onnes' direction by Gilles Holst with mercury resulted in the discovery of
superconductivity in 1911, 100 years ago [4]. This significant discovery has led to many other
technological developments and associated applications.
In addition to a critical temperature, above which the superconductor will revert from a
superconducting to a normal electrical state, superconductors also exhibit a critical magnetic
field, above which the same occurs. Superconductors that exhibit a pure Meissner effect,
completely excluding externally applied magnetic fields from their interior, are known as Type I
superconductors. These are all pure metals and have transition temperatures ranging up to
approximately 9.5 K. Their behavior is described by the phenomenological Ginzburg-Landau
theory of 1950 [11], which explained the macroscopic properties of superconductors. This theory
has been largely superseded by BCS theory of 1957 [12, 13], which is a complete microscopic
theory of superconductivity and is named after its originators John Bardeen, Leon Cooper, and
Robert Schrieffer. In the 1930s the first Type II superconductors were discovered, though not
recognized until after the Meissner Effect was discovered in 1933 by Walther Meissner and
Robert Ochsenfeld. Type II superconductors have much higher critical magnetic fields and are
distinguished from Type I superconductors by exhibiting a mixed state, also called a vortex
state, where the external magnetic field is only partially excluded, so that part of the material is
superconducting and part is normal. In 1986, high temperature superconductors (HTS) were
discovered by Muller and Bednorz, who had created a brittle ceramic compound of lanthanum,
barium, copper and oxygen that had a critical temperature of 30 K. A year later, teams at the
University of Houston and at the University of Alabama-Huntsville replaced the lanthanum with
yttrium achieving a critical temperature of 92 K. Many milestones have been since reached.
These are ceramic based compounds and are also classified as Type II superconductors with
critical temperatures that are above the normal boiling point of liquid nitrogen [14].
Superconducting technology not only brought to light some new technologies which would
otherwise not be possible, it also reduced the cost and improved the efficiency, quality, size and
weight of other technologies, especially in the area of scientific accelerator technologies.
APPLICATIONS OF SUPERCONDUCTIVITY
Among the many applications which utilize superconducting technology, a few are listed below:
1. Detectors: Josephson junctions, Superconducting Quantum Interference Devices
(SQUIDS), fast digital circuits
2. Science: Particle accelerators using superconducting magnets and/or SRF cavities, fusion
research
3. Biomedical applications: Magnetic Resonance Imaging (MRI), Cryosurgery, SQUID
devices, EEG, Magneto Encephalography (MEG) to study the neural activity in brain,
Magneto Cardiography (MCG)
4. Transportation: MagLev (super-conducting magnetic levitation - used for mass transit
rail or orbital insertion of payloads)
5. Military: Superconducting electric motors for ship propulsion, microwave radar
6. Energy: SMES (for energy storage, power quality, pulse power, etc.), low loss power
transmission systems
7. Other: Cryogenically cooled super computers, communication systems, ore separation,
SQUID based Non Destructive Testing (NDT)
Many of these low-temperature superconducting (LTS) and some HTS applications require
helium refrigerators in order to operate. The cold must be supplied and continuously maintained
by cryogenic fluid or by a cryogenic refrigerator, or a combination of both. To produce and
maintain any temperature significantly below ambient is a challenge, but to keep temperatures
within a few kelvin of absolute zero is a great challenge that continues today, 100 years after the
discovery of superconductivity. In essence, superconductivity is dependent on cryogenics at the
present time. The key cryogenic engineering elements include refrigeration, thermal insulation,
and delivery of the cold to the heat load. Efficiency, practicality and simplicity are the keys to
success in all these elements.
REFERENCES:
[1] Emsley, J., "Nature's Building Blocks," Oxford University Press, Oxford, UK, 2001, pp.
175-179.
[2] Kochhar, R. K. (1991). "French astronomers in India during the 17th - 19th centuries".
Journal of the British Astronomical Association 101 (2): 95-100. Bibcode 1991JBAA..101...95K
[3] Van Delft, D., Kes, P., "The discovery of superconductivity," Physics Today, Sept. 2010,
pp. 38-43.
[4] R.G. Scurlock, History and Origins of Cryogenics, Monographs on Cryogenics-8, Oxford
Science Publications, 1992.
[5] Dirk Van Delft, Museum Boerhaave, "Heike Kamerlingh Onnes and the Road to Liquid
Helium", IEEE/CSC & ESAS European Superconductivity News Forum (ESNF), No. 16,
April 2011.
[6] Cady, H.P.; McFarland, D. F. (1906). "Helium in Natural Gas". Science 24 (611): 344.
doi:10.1126/science.24.611.344
11. Production Part Approval Process (PPAP)
Kiran I. Nargatti
Asst. Professor
The Production Part Approval Process (PPAP) is used to establish confidence in component
suppliers and their production processes, by demonstrating that: "all customer engineering
design record and specification requirements are properly understood by the supplier and that
the process has the potential to produce product consistently meeting these requirements during
an actual production run at the quoted production rate." The PPAP provides customers with
evidence that component suppliers have understood their requirements, the product meets the
customer’s requirements and the production process is capable of consistently producing
conforming product. PPAP was initially developed by the AIAG (Automotive Industry Action
Group) in 1993 with input from the Big 3: Ford, Chrysler, and GM. The AIAG's 4th edition,
effective June 1, 2006, is the most recent version.
PPAP submission:-
PPAP submission is required with any significant change to product or process. Usually PPAP
submission is required in the following situations:
New part
Engineering change(s)
Tooling: transfer, replacement, refurbishment, or additional
Correction of discrepancy
Tooling inactive > one year
Change to optional construction or material
Sub-supplier or material source change
Change in part processing
Parts produced at a new or additional location
PPAP Requirements:-
The following documents and items must be completed by the supplier for each part before
approval, as necessary:
1. Design Records:- A copy of the drawing, if the supplier is responsible for designing
the part. The purpose of design records and ballooned drawings is to document the
formal part print and any additional engineering records for reference.
2. Authorized Engineering Change Documents:- It is a document that shows the
detailed description of the change. Usually this document is called "Engineering Change
Notice", but it may be covered by the customer PO or any other engineering
authorization
3. Customer Engineering Approval, if required:- This approval is usually the
engineering trial with production parts performed at the customer plant. A "temporary
deviation" is usually required to send parts to the customer before PPAP approval. The
customer may require other "Engineering Approvals".
4. Design Failure Modes and Effects Analysis (DFMEA):- The Design FMEA shows
evidence that the potential failure modes and their associated risks have been addressed, to
eliminate or minimize their effects through product design changes and improvements.
5. Process Flow Diagram:- Process Flow Diagrams (PFD) are used to document and
clarify all the steps involved in the manufacturing of a part. The Primary process steps
must match the process steps addressed in PFMEA and the control plan. Process flow
should include the entire manufacturing process flow (receiving through shipping)
6. Process Failure Modes and Effects Analysis (PFMEA):- PFMEA stands for Process
Failure Modes and Effects Analysis. This shows evidence that the potential failure
modes and the associated risks have been assessed during the manufacturing process
design stage to eliminate or minimize their effects on the part/product.
7. Control Plan:- The Control plan is a document which provides the information on
controls that are being established in the process to control the Product and Process
characteristics for all the processes involved in the production of the part. It is a
derivative document of the Process Flow Diagram & PFMEA to address the Process
characteristics & the failure modes in the process.
8. Measurement Systems Analysis (MSA):- This contains the Gage R&R for the critical
or high impact characteristics, and a confirmation that gauges used to measure these
characteristics are calibrated
9. Dimensional Results:- A list of every dimension noted on the ballooned drawing. This
list shows the product characteristic, specification, the measurement results and the
assessment showing if this dimension is "ok" or "not ok".
10. Records of Material / Performance Test Results:- A summary of every test performed
on the part. It is evidence that the supplier is using the correct raw material/grade as per
the design record. The supplier can submit the material certificate either by obtaining one
from the raw material supplier or by performing certification testing at an outside laboratory.
11. Initial Process Studies:- This section shows all Statistical Process Control charts
affecting the most critical characteristics.
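The control charts mentioned above can be sketched as X-bar limits computed from subgroup data. This is a minimal Shewhart-style sketch assuming subgroups of size 5 (A2 = 0.577); the subgroup data are invented:

```python
import statistics

# Sketch of Shewhart X-bar control limits for an initial process study,
# using the standard A2 constant for subgroups of size 5. The subgroup
# measurements below are illustrative only.

A2 = 0.577  # Shewhart constant for subgroup size n = 5

def xbar_limits(subgroups):
    """Return (LCL, UCL) for the X-bar chart from rational subgroups."""
    xbars = [statistics.mean(g) for g in subgroups]
    rbar = statistics.mean(max(g) - min(g) for g in subgroups)
    grand = statistics.mean(xbars)
    return grand - A2 * rbar, grand + A2 * rbar

subgroups = [[10.0, 10.1, 9.9, 10.0, 10.2],
             [10.1, 10.0, 10.0, 9.9, 10.1]]
lcl, ucl = xbar_limits(subgroups)
print(round(lcl, 3), round(ucl, 3))
```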
12. Qualified Laboratory Documentation:- Copies of all certifications of the
laboratories that performed the tests.
INFLUENCE May 2016
Department of Mechanical Engineering Page | 42
13. Appearance Approval Report (AAR):- The appearance report should contain the part
images, part-marking images, and painting test requirements such as the adhesion test
report and RAL shade-card color confirmation, wherever applicable as per drawing and
specification requirements.
14. Sample Production Parts:- A sample from the same lot as the initial production run
needs to be submitted.
15. Master Sample:- Suppliers retain one or more samples of the PPAP parts at their location
with appropriate identification and traceability as per their internal quality system
requirements. This is helpful for future reference.
16. Checking Aids:- Checking aids refers to the document containing the list of measuring
instruments, gauges, equipment and other fixtures used for qualifying the parts during
regular production.
17. Customer-Specific Requirements:- Each customer may have specific requirements to
be included on the PPAP package like Sampling Quality Plan, Raw material Quality
plan, Master Raw material Color Code Chart, etc.
18. Part Submission Warrant (PSW):- This is the form that summarizes the whole PPAP
package. It shows the reason for submission (design change, annual revalidation, etc.)
and the level of documents submitted to the customer. If the submitted PPAP meets
all the necessary requirements, the PPAP team representative will provide approval
through an authorized signatory on the PSW and send the approved PSW to the supplier.
PPAP Submission Levels
Level 1: Part Submission Warrant (PSW) and 5 samples with material/process/appearance report
Level 2: PSW with limited supporting documents
Level 3: PSW with complete supporting documents
Level 4: PSW and documents as requested by the customer
Level 5: PSW with product samples and complete supporting documents available for review at
the supplier's manufacturing location
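The five levels can be captured as a simple lookup table (wording paraphrased from the list above; "PSW" = Part Submission Warrant):

```python
# The five PPAP submission levels as a lookup table. Wording is paraphrased
# from the article, not quoted from the AIAG manual.
PPAP_LEVELS = {
    1: "PSW and 5 samples with material/process/appearance report",
    2: "PSW with limited supporting documents",
    3: "PSW with complete supporting documents",
    4: "PSW and documents as requested by the customer",
    5: "PSW with samples and complete documents for review at the supplier's site",
}

def required_evidence(level):
    """Return the evidence expected at a given submission level (1-5)."""
    return PPAP_LEVELS[level]

print(required_evidence(3))
```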
PPAP Status:-
The PPAP disposition status is communicated to the supplier as follows:
Approved:- The part meets requirements and the supplier is authorized to ship production
quantities of the part.
Interim Approval:- Permits shipment of the part on a limited-time or piece-quantity basis.
Root causes of nonconformities are defined and an action plan is agreed with the customer.
Rejected:- The parts submitted and the accompanying documentation do not meet customer
requirements. Resubmission is required before production quantities can be supplied.
Reference:-
1. AIAG, Production Part Approval Process (PPAP), 4th Edition.
2. NCR PPAP User Guide,
http://www.ncr.com/wp-content/uploads/ppap-user-guide1.pdf
3. DaimlerChrysler, Ford Motor Company and General Motors Corp., Production Part Approval
Process (PPAP) Requirements Manual.
12. OSCILLATING I. C. ENGINE
Kiran J. Burle
Asst. Professor
The oscillating internal combustion engine introduces an innovative technology that
substitutes pistons and crank gear with oscillating flaps. When investigating the cycle of
an internal combustion engine, it is evident that only 40% or less of the heat energy is
converted into useful work, while the major part is dissipated as losses. This new internal
combustion engine generates about 1.55 times higher power. The same static pressure
arrangement is applicable to a variety of combustible mixtures that could be used as
alternative fuels. The oscillating engine minimizes internal resistance to boost power as
well as efficiency. Using this technology, more than 40% of fuel can be saved. The idea
behind the oscillating engine is to find a method of transferring gas pressure force into
driving force virtually without any loss.
The oscillating engine uses the total gas pressure to drive the shaft, and this motion is
converted to complete rotary movement at the next stage. The engine does not use a crank for
power generation, which usually introduces considerable internal resistance into the system.
During idle running, most power in a conventional engine is spent merely overcoming this
internal resistance, so the oscillating engine consumes very little fuel at idle. Similarly,
only a small battery power is required to start the engine. In a conventional engine there is
always a lateral thrust on the cylinder walls from the piston rings, resulting in wear of the
engine parts. In the oscillating engine no lateral thrust acts on the walls of the working
chamber or barrel, so the engine can operate for a long period. The oscillating engine has
two compartments generating two working areas; as a result, the engine dimensions and weight
are minimized. With all these factors the oscillating engine achieves a higher efficiency,
and its exhaust gas emissions are lower, reducing environmental pollution.
THE OSCILLATING ENGINE INCORPORATES
1. An oscillations generator, and
2. An oscillations converter, coupled by the oscillating shaft.
OSCILLATIONS GENERATOR
The combustion cycle takes place in this part and imparts motion to the oscillating shaft.
The principal elements of this power generator are the working head, the working vessel and
the flap accommodated inside. The cross section of the housing constitutes a semicircular
segment with a cylindrical profile. Inside the housing a flap can oscillate about the axis
of the shaft, which is fitted to the flap. The flap slides against the walls of the housing,
thereby defining two working areas in the centrally divided housing. The engine head provides
seals between the two working areas. Intake and exhaust ports are operated by valves from
overhead camshafts. Oscillations generated by the thermal cycle in this arrangement are
conveyed to the oscillations converter coupled through the oscillating shaft.
The pair of flaps occupies the two working spaces created by the combination of the working
barrel and the engine head fitted over it. Oscillations are generated by the combustion cycle
taking place inside the working areas and impart motion to the shaft fitted to the flaps,
which conveys the oscillations to ratchet wheels. On the oscillating shaft, next to each gear
wheel, a ratchet wheel is mounted on bearings, accompanied by a shuttle comprising a set of
spring-loaded pawls fitted on projecting flanges that engage the ratchet wheel. Two sets of
spring-loaded pawls engage each ratchet wheel so that swings in both directions are
transferred. Each set of pawls is accompanied by a gear wheel running freely on the
oscillating shaft and coupled to another gear to transfer its motion. The oscillations on the
ratchet wheels are independently turned into continuous rotary motion and brought to a single
output shaft.
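The oscillation-to-rotation conversion described above can be sketched numerically: each ratchet set captures one swing direction, so the output shaft accumulates the magnitude of every swing. The swing angles below are illustrative, not taken from any actual engine:

```python
import math

# Minimal sketch of the oscillation-to-rotation idea: two ratchet/pawl sets
# each pick up one swing direction, so the output shaft advances by the
# absolute value of every swing. Angles and gearing are illustrative.

def output_rotation(swing_angles):
    """swing_angles: signed flap swings in radians (+ and - alternate).
    Each direction drives its own ratchet, so the output advances by the
    absolute swing either way."""
    return sum(abs(a) for a in swing_angles)

# Four alternating swings of +/- 60 degrees:
swings = [math.radians(60), math.radians(-60)] * 2
total = output_rotation(swings)
print(round(math.degrees(total)))  # accumulates 240 degrees of output rotation
```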
OSCILLATIONS CONVERTER
Oscillations conveyed by the oscillating shaft are converted into continuous rotary motion by
mechanical means. Power is therefore available at the shaft in a continuous rotary manner.
References
1. Fernando Salazar, "Internal Combustion Engines", Department of Aerospace and Mechanical
Engineering, University of Notre Dame, Notre Dame, IN 46556, 2009.
2. Oledzki, W., "About a New Conception of Internal Combustion Engine Construction II.
Oscillating Engines," SAE Technical Paper 2009-01-1456, 2009, doi:10.4271/2009-01-1456.
3. Kevin Anderson, Steve Cunningham, Martin Stuart, Chris McNamara, "Polygon Oscillating
Piston Engine", International Journal of Engineering Inventions, e-ISSN: 2278-7461,
p-ISSN: 2319-6491, Volume 4, Issue 10, June 2015, pp. 12-17.
13. Bose Electromagnetic Suspension
Manoj M. Jadhav
Asst. Professor
1. Introduction
In 1980, Bose founder and CEO Dr. Amar Bose conducted a mathematical study to
determine the optimum possible performance of an automotive suspension, ignoring the
limitations of any existing suspension hardware. The result of this 5-year study indicated that it
was possible to achieve performance that was a large step above anything available. After
evaluating conventional and variable spring/damper systems as well as hydraulic approaches, it
was determined that none had the combination of speed, strength, and efficiency that is
necessary to provide the desired results. The study led to electromagnetic as the one approach
that could realize the desired suspension characteristics. The Bose suspension required
significant advancements in four key disciplines: linear electro-magnetic motors, power
amplifiers, control algorithms, and computation speed. Bose took on the challenge of the first
three disciplines and bet on developments that industry would make on the fourth item.
2. The Components Used in the Bose Suspension System
2.1 Linear Electromagnetic Motor:
A linear electromagnetic motor is installed at each wheel of a Bose equipped vehicle.
Inside the linear electromagnetic motor are magnets and coils of wire. When electrical power is
applied to the coils, the motor retracts and extends, creating motion between the wheel and car
body. One of the key advantages of an electromagnetic approach is speed.
Fig. Linear Electromagnetic Motor Mounted On Wheel
The linear electromagnetic motor responds quickly enough to counter the effects of bumps and
potholes, maintaining a comfortable ride. Additionally, the motor has been designed for
maximum strength in a small package, allowing it to put out enough force to prevent the car
from rolling and pitching during aggressive driving maneuvers. The Bose linear electromagnetic
motor offers easy two-point mounting. The only electrical connections to the motor are for
power and control.
2.2 Linear motor
A linear motor is essentially a multi-phase alternating current (AC) electric motor that has
had its stator "unrolled" so that, instead of producing a torque (rotation), it produces a
linear force along its length. Many designs have been put forward for linear motors, falling
into two major categories: low-acceleration and high-acceleration linear motors.
Low-acceleration linear motors are suitable for maglev trains and other ground-based
transportation applications.
Fig. Linear motor
High-acceleration linear motors are normally quite short, and are designed to accelerate an
object up to a very high speed and then release the object, like roller coasters.
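The force produced by such an "unrolled" motor can be estimated from the Lorentz force, F = B·I·L. The field strength, current and conductor length below are hypothetical round numbers, not Bose figures:

```python
# Illustrative Lorentz-force estimate for the "unrolled stator" idea:
# a conductor of total length L carrying current I in field B produces
# F = B * I * L along the motor axis. These values are hypothetical
# round numbers, not Bose specifications.

def linear_motor_force(b_tesla, current_a, conductor_m):
    """Axial force in newtons from F = B * I * L."""
    return b_tesla * current_a * conductor_m

print(linear_motor_force(1.2, 10.0, 50.0))  # 600.0 N
```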
2.3 Power Amplifier
The power amplifier delivers electrical power to the motor in response to signals from the
control algorithms. The amplifiers are based on switching amplification technologies pioneered
by Dr. Bose at MIT in the early 1960s – technologies that led to the founding of Bose
Corporation in 1964. The regenerative power amplifiers allow power to flow into the linear
electromagnetic motor and also allow power to be returned from the motor. For example, when
the Bose suspension encounters a pothole, power is used to extend the motor and isolate the
vehicle’s occupants from the disturbance. On the far side of the pothole, the motor operates as a
generator and returns power back through the amplifier. In so doing, the Bose suspension
requires less than a third of the power of a typical vehicle's air conditioning system.
2.4 Control Algorithms
The Bose suspension system is controlled by a set of mathematical algorithms developed
over 24 years of research. These control algorithms operate by observing sensor
measurements taken from around the car and sending commands to the power amplifiers
installed at each corner of the vehicle. The goal of the control algorithms is to allow the
car to glide smoothly over roads and to eliminate roll and pitch during driving.
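Bose's actual algorithms are proprietary; a classic "skyhook" damping law is a common textbook stand-in for the idea of commanding each actuator from body-motion sensor readings. The gain below is an assumed value:

```python
# A classic "skyhook" damping law as an illustrative stand-in for the
# (proprietary) Bose control algorithms: command a force opposing the
# absolute body velocity, as if the body hung from a damper fixed in
# the sky. The gain is an assumed value.

C_SKY = 1500.0  # N*s/m, hypothetical skyhook damping gain

def actuator_force(body_velocity_mps):
    """Force command (N) sent to the linear motor at one corner."""
    return -C_SKY * body_velocity_mps

# Body heaving upward at 0.2 m/s -> command a downward force:
print(actuator_force(0.2))  # -300.0 N
```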
Fig. Control Algorithms of Active Suspension
3. Working:
The Bose system uses a linear electromagnetic motor (L.E.M.) at each wheel, in
lieu of a conventional shock and spring setup. The L.E.M. has the ability to extend (as if into a
pothole) and retract (as if over a bump) with much greater speed than a fluid damper (taking just
milliseconds). These lightning-fast reflexes and precise movement allow the wheel's motion to
be so finely controlled that the body of the car remains level, regardless of the goings-on at the
wheel level.
Fig. Arrangement of Bose Suspension
The L.E.M. can also counteract the body motion of a car while accelerating, braking and
cornering, giving the driver a greater sense of control and passengers less of a need for
Dramamine. To further the smooth ride goal, wheel dampers inside each wheel hub smooth out
small road imperfections, isolating even those nuances from the passenger compartment.
Torsion bars take care of supporting the vehicle, allowing the Bose system to concentrate on
optimizing handling and ride dynamics.
A power amplifier supplies the juice to the L.E.M.s. The amplifier is a regenerative
design that uses the compression force to send power back through the amplifier. Thanks to this
efficient layout, the Bose suspension uses only about a third of the power of a vehicle’s air
conditioning system. There are a few other key components in the system, such as control
algorithms that Bose and his fellow braincases developed over a few decades of crunching
numbers. The target total weight for the system is 200 pounds, a goal Bose is confident of
attaining.
4. Vehicle Performance:
Vehicles equipped with the Bose suspension have been tested on a variety of roads and
under many different conditions, demonstrating the comfort and control benefits drivers will
encounter during day-to-day driving. In addition, the vehicles have undergone handling and
durability testing at independent proving grounds. When test drivers execute aggressive
cornering maneuvers like a lane change, the elimination of body roll is appreciated immediately.
Similarly, drivers quickly notice the elimination of body pitch during hard braking and
acceleration. Professional test drivers have reported an increased sense of control and confidence
resulting from these behaviors. When test drivers take the Bose suspension over bumpy roads,
they report that the reduction in overall body motion and jarring vibrations results in increased
comfort and control.
4.1 On A Bumpy Surface
Two vehicles of the same make and model are driven over a bump course at night, both at the
same speed. One vehicle has the original factory-installed suspension and the other has a
Bose suspension system. The Lexus with the standard factory-installed suspension joggles as
it coasts along the bumpy surface, while the Lexus with the Bose suspension system sails
along the same road unperturbed.
Fig. A Lexus with the Bose system
4.2 Body Roll While Cornering:
Two vehicles of the same make and model are shown performing an aggressive cornering
maneuver.
Fig. Lexus with a standard suspension Fig.A Lexus with the BOSE suspension
In the photo, the Lexus car without the BOSE system leans as it turns a corner, while the car
with the Bose system remains stable.
5. Features:
The system draws about two horsepower, or one-third the load of a typical air
conditioner. While it can exert 50 kilowatts (67 horsepower) to leap over a 2x6 plank, it
recovers 49 kilowatts cushioning the landing, with the shocks working like generators.
The torsion bars and shock units weigh about the same as two conventional springs and
shocks.
The system lets a vehicle ride lower at highway speeds to produce less drag and
improve handling.
To save power the system is regenerative: when the far side of a pothole helps to push
the wheel up, almost all the power is recovered.
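The power figures quoted above can be checked with a little arithmetic (1 hp = 0.7457 kW):

```python
# Energy bookkeeping for the figures quoted above: 50 kW exerted in the
# leap, 49 kW recovered cushioning the landing, and an average draw of
# about two horsepower. The only added fact is the standard conversion
# 1 hp = 0.7457 kW.

HP_TO_KW = 0.7457

exerted_kw = 50.0
recovered_kw = 49.0
net_kw = exerted_kw - recovered_kw

print(net_kw)                   # 1.0 kW net for the leap itself
print(round(2 * HP_TO_KW, 2))   # 1.49 kW, i.e. about two horsepower
```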
6. Future Prospects:
Dr. Bose stated that within five years the company hopes to have the Bose
suspension offered on one or more high-end luxury cars, and thanks to the system's modular
design, it shouldn't be much of a problem to install at the factory. A manufacturer will be chosen
to co-develop a production application for sale after three or four years. GM is expected to be
the first development partner, given the long relationship between the companies.
The biggest setback would be the cost as it is going to cost more than any suspension
does now. The neodymium iron in the magnets is the most expensive part. Expect to see
electromagnetic suspensions only on very expensive cars first, and probably never on cheap
ones, though we imagine that the cost would come down as production goes up.
14. RAPID PROTOTYPING SYSTEMS: CLASSIFICATION AND
ADVANTAGES
Manojkumar M. Salgar
Asst. Professor
Classification of rapid prototyping systems:
One of the broadest ways of classifying Rapid Prototyping (RP) systems is by the initial
form of their raw material. On this basis they are classified as:
1. Liquid-Based Processes- Liquid-based RP systems begin with their material in the liquid
state. Through a process commonly known as curing, the liquid is converted into the solid
state. The important processes under this category are Stereolithography (SLA) and Solid
Ground Curing (SGC).
2. Solid-Based Processes- The material in these processes can be in the form of a wire, a
roll, laminates or pellets. The important processes under this category are Laminated Object
Manufacturing (LOM) and Fused Deposition Modeling (FDM).
3. Powder-Based Processes- These processes use powder in grain-like form. Principal amongst
these are Selective Laser Sintering (SLS) and Three Dimensional Printing (3DP).
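The three-way classification can be summarized as a lookup from process acronym to initial material state, using only the processes named above:

```python
# The three-way RP classification as a lookup from process acronym to the
# initial material state (entries taken from the article's list).
RP_PROCESSES = {
    "SLA": "liquid", "SGC": "liquid",
    "LOM": "solid", "FDM": "solid",
    "SLS": "powder", "3DP": "powder",
}

def material_state(process):
    """Return the initial material state for a named RP process."""
    return RP_PROCESSES[process]

print(material_state("FDM"))  # solid
```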
KEY ASPECTS OF RAPID PROTOTYPING:
Fundamentally, the development of RP can be seen in four primary areas. The Rapid
Prototyping Wheel in Figure (1) depicts these four key aspects of Rapid Prototyping:
Input, Method, Material and Applications.
FIG.(1) Rapid prototyping wheel depicting the four major aspects of RP.
1. Input-
Input refers to the electronic information required to describe the physical object with 3D
data. There are two possible starting points: a computer model or a physical model. The
computer model created by a CAD system can be either a surface model or a solid model. On
the other hand, obtaining 3D data from a physical model is not at all straightforward.
2. Method-
While there are currently more than 20 vendors of RP systems, the method employed by each
vendor can be generally classified into the following categories: photo-curing, cutting and
gluing/joining, melting and solidifying/fusing, and joining/binding. Photo-curing can be
further divided into the categories of single laser beam, double laser beams and masked lamp.
3. Material-
The initial state of the material can be solid, liquid or powder. In the solid state, it can
come in various forms such as pellets, wire or laminates. The current range of materials
includes paper, nylon, wax, resins, metals and ceramics.
4. Applications-
Most RP parts are finished or touched up before they are used for their intended
applications. Applications can be grouped into (1) Design, (2) Engineering, Analysis, and
Planning, and (3) Tooling and Manufacturing. A wide range of industries can benefit from RP;
these include, but are not limited to, aerospace, automotive, biomedical, consumer, and
electrical and electronics products.
Advantages of Rapid Prototyping-
1. Direct Benefits-
The benefits to a company using RP systems are many. One is the ability to experiment with
physical objects of any complexity in a relatively short period of time. It is observed that
over the last 25 years, products brought to the market place have increased in complexity of
shape and form. For instance, compare the aesthetically beautiful car body of today with
that of the 1970s. On a relative complexity scale of 1 to 3, as seen in the figure below, it
is noted that from a base of 1 in 1970, this relative complexity index increased to about 2
in 1980 and close to 3 in the 1990s.
To the individual in the company, the benefits can be varied and have different impacts,
depending on the role they play. In the figure below, the activities required for full
production in a conventional model are depicted at the top; at the bottom is the RP model.
Depending on the size of production, savings in time and cost could range from 50% up to
90%!
Fig. Project time and product complexity in 25 year time frame
Fig. Result of integration of RP technologies
1.1. Benefits to Product Designers-
Product designers can increase part complexity with little effect on lead time and cost.
More organic, sculptured shapes for functional or aesthetic reasons can be accommodated.
They can optimize part design to meet customer requirements, with few restrictions imposed
by manufacturing. In addition, they can reduce the parts count by combining features into
single-piece parts that were previously made from several pieces because of poor tool
accessibility or the need to minimize machining and waste. With fewer parts, the time spent
on tolerance analysis, selecting fasteners, detailing screw holes and assembly drawings is
greatly reduced.
1.2. Benefits to the Tooling and Manufacturing Engineer-
The main savings are in costs. The manufacturing engineer can minimize the design,
manufacture and verification of tooling. He can realize profits earlier on new products,
since fixed costs are lower. He can also reduce the parts count and, therefore, assembly,
purchasing and inventory expenses.
2. Indirect Benefits
Outside the design and production departments, indirect benefits can also be derived. Marketing
as well as the customers will also benefit from the utilization of RP technologies.
2.1. Benefits to Marketing-
To marketing, RP presents new capabilities and opportunities. It can greatly reduce
time-to-market, resulting in (1) reduced risk, as there is no need to project customer needs
and market dynamics several years into the future, (2) products which fit customer needs
much more closely, (3) products offering the price/performance of the latest technology, and
(4) new products being test-marketed economically.
References-
1. Jurgen Stampfl, Hao-Chi Liu, Seo Woo Nam, "Rapid Prototyping of Mesoscopic Devices",
Proceedings Micromaterials 2000, Berlin, April 2000.
2. Karunakaran K. P., "Rapid Prototyping and Tooling", Internal Circulation, IIT Bombay.
3. Jurgen Stampfl, Hao-Chi Liu, Alexander Nickel, "Rapid Prototyping and Manufacturing by
Gelcasting of Metallic and Ceramic Slurries", Materials Science and Engineering, Elsevier.
4. Wheelwright, S.C. and Clark, K.B., Revolutionizing Product Development: Quantum Leaps in
Speed, Efficiency, and Quality, The Free Press, New York, 1992.
5. Ulrich, K.T. and Eppinger, S.D., Product Design and Development, 2nd edition, McGraw
Hill, Boston, 2000.
6. Hornby, A.S. and Wehmeier, S. (Editor), Oxford Advanced Learner's Dictionary of Current
English, 6th edition, Oxford University Press, Oxford, 2000.
7. Koren, Y., Computer Control of Manufacturing Systems, McGraw Hill, Singapore, 1983.
8. Hecht, J., The Laser Guidebook, 2nd edition, McGraw Hill, New York, 1992.
15. Trends in Micro Machining Technologies
Omkar R. Chandawale
Asst. Professor
Over the past several years there has been an increased interest in micro machining
technology, which has captured the imagination of every manufacturing and industry segment.
From aerospace and medical appliances to the automotive world, the potential for product
miniaturization continues to grow while posing numerous technical challenges.
In response to this continued miniaturization, companies are developing new technologies to
meet the unique challenges posed by micro manufacturing and must develop appropriate
machining systems to support this growth.
The manufacture of miniature parts is not new. Many companies have used machining
technologies such as EDM and laser to produce micro details for many years. The difference
today is the sheer volume of products that require micro machining, and the accelerated rate
of change is remarkable.
New miniature products are changing how we view the world, and many manufacturers are
developing micro machining technologies and techniques to support this growth. Companies are
looking for parts with feature sizes of less than 100 microns, about the diameter of a human
hair. At this scale, the slightest variation in the manufacturing process, caused by
material or cutting-tool characteristics, thermal variations in the machine, vibration or
any number of other minute changes, will have a direct impact on the ability to produce
features of this type on a production scale.
These types of tolerances are mind-boggling and would have been unthinkable just a few years
ago. Talk of using end mills of 0.002" (50 µm) diameter, EDM wire diameters of 0.00078"
(20 µm), or electrodes smaller than a few tenths is becoming more commonplace. Micro-meso
machining technologies are being employed in the manufacture of a wide variety of products
and devices:
Medical Components
Micro Molds
Electronic Tooling
MEMS (Micro-Electrical-Mechanical-System)
Fluidic Circuits
Micro-Valves
Particle Filters
Sub miniature Actuators & Motors
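The inch-to-micron equivalences quoted earlier (0.002" ≈ 50 µm, 0.00078" ≈ 20 µm) can be verified with a one-line conversion (1 inch = 25,400 microns):

```python
# Verifying the inch/micron equivalences quoted in the text
# (1 inch = 25.4 mm = 25,400 microns).

def inches_to_microns(inches):
    return inches * 25400.0

print(round(inches_to_microns(0.002)))    # 51 (quoted as a 50 um end mill)
print(round(inches_to_microns(0.00078)))  # 20 (EDM wire diameter)
```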
The trends in the ultra-high accuracy and micro-miniature manufacturing fields require a fresh
look at new machine technology and process techniques.
DEFINING MICRO~MESO MACHINING TECHNOLOGY
The term micro machining has a variety of definitions, depending upon whom you are speaking
to. To many of us in manufacturing, micro machining simply means small or miniature. Those
in academia and research define micro quite literally as 10⁻⁶, in other words one-millionth
of a meter (a micron, or micrometer, from the Greek mikros, small, is a metric unit of
length equal to one-millionth of a meter). For purposes of clarification, we need to further
define what all of this means to us in manufacturing, based on the part or feature size.
Figure 2 illustrates the range of part and feature size machining capability. Parts with
machined features below 0.004" (100 µm) fit into the middle (i.e. meso) range of
manufacturing.
The chart below provides a basic reference of the micro machining range of various machining
technologies.
CHALLENGES OF MICRO MACHINING SYSTEM DEVELOPMENT
Maintaining control of all of the machining variables, including the machine tool, work- and
tool-holding, the environment, and the cutting tools or electrodes, has a huge cumulative
effect on the end result.
With such small parts and feature sizes, accuracy takes on a completely new meaning. For
example, a ±0.0002" tolerance is very different if the feature being machined is 0.200"
rather than only 0.002" in size. For this reason, it becomes necessary to re-think the
meaning of precision.
There are several key areas of concern when machining details this small.
1. Environmental changes that impact accuracy, process predictability and repeatability
2. Vibration (Internal and External)
3. Part Management
4. Cutting Fluids and Fluid Dynamics
Machine resolution, control, construction and ancillary tools all become much more critical
to success in producing micro parts.
Besides the machining of micro features and parts themselves, simply handling micro parts
and tools poses unique challenges. This has a direct impact on the repeatability of a
process that has a desired tolerance of less than a micron.
USING NANOMETER RESOLUTION
Although high-resolution feedback systems have improved accuracy and eliminated servo drift,
there is more that needs to be done. Figure 4 demonstrates the difference in servo drift
between 1 µ and 0.1 µ feedback resolution.
It is also no longer good enough to have feedback systems and resolutions only in the micron
range. When machining in the micro machining range, sub-micron control and feedback
systems are necessary.
Due to this cumulative effect of errors, a good rule of thumb is that the systems employed
for manufacturing should be 10 times more accurate than the repeatable tolerance desired.
This means that achieving a ±0.00020" (±5 micron) tolerance would require a system providing
at least 0.000020" (0.5 micron) precision.
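The 10x rule of thumb reduces to dividing the desired tolerance by ten:

```python
# The 10x rule of thumb quoted above: the manufacturing system should
# resolve to one tenth of the desired repeatable tolerance.

def required_system_precision(tolerance):
    return tolerance / 10.0

# +/-0.00020" tolerance -> 0.000020" (about 0.5 micron) system precision:
print(required_system_precision(0.00020))
```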
Feedback resolutions in the 10~50 nanometer range are now available to improve "resolution"
accuracy on new machine tools; however, this does not necessarily make a machine accurate.
Resolution is the digital accuracy of the machine tool and does not correct for alignment
problems.
Alignment is critical in producing accuracy, and depending upon the number of axes that are
combined in a system, the perpendicularity, parallelism and straightness of the positioning axes
to each other are just as critical to precision as the resolution. Figure 5 shows the volumetric
accuracy of a Vertical Machining Center.
ENVIRONMENTAL CONTROL
We all know that the shop environment can change during the day. Even a single degree of
change will affect accuracy when machining at the submicron level.
Simply monitoring the room is not enough. Structural changes caused by temperature variations
are affected by the mass of the machine. The rate of change (convection) will vary from system
to system based on mass.
For this reason, it makes sense to incorporate a machine thermal enclosure to maintain the
machine in its own controlled environment.
Figure 6 demonstrates the thermal stability that can be achieved with a multi-layer thermal
enclosure that combines insulation with a layer of air. Air temperature and flow are
monitored and controlled to maintain a uniform temperature distribution.
Direct monitoring of minute temperature variations that occur in the machining system can be
accomplished through the use of monitors (thermistors) embedded in the machine frame or
casting.
SUBMICRON TOOLING
Precision work- and tool (electrode)-holding systems capable of positional repeatability in
the 1 micron range have been commercially available since the late 1980s. Originally
designed for the EDM industry, these systems are highly reliable, even in the harsh EDM
environment, and are now found on virtually every type of machining system.
From machining centers to grinders, to coordinate measuring machines, these holding systems
can be found in every type of manufacturing environment.
Due to the demands for better accuracy required for Micro Machining, new innovations in
tooling with submicron repeatability are necessary.
Figure 8 (courtesy of Erowa Technologies) demonstrates the advances in work-holding
technology being developed for the micro machining market.
According to the manufacturer, this new tooling system is twice as accurate as previously
available systems.
MICRO MILLING
Micro machining using conventional technologies such as milling presents unique challenges
in manufacturing. Cutting forces and tool pressures when using micro tools create a whole
new realm of problems. Any variation in axis position during the cut can be disastrous.
The spindle must be stable and must minimize thermal expansion, tool-change variation and
vibration. Any vibration or run-out at the tool tip will adversely influence the surface
finish and accuracy.
One of the problems associated with micro milling is the amount of force involved in
removing material at the particulate level.
One solution developed several years ago is a direct tool-change type spindle. By
eliminating the tool holder, total run-out caused by tool-holder variation is reduced, which
is ideal for micro machining due to the elimination of stack-up issues.
Whether using a direct type, or one that uses a tooling system such as HSK, the spindle must be
stable and minimize thermal expansion, tool change variation and vibration.
The method of cooling and lubricating the spindle has a direct impact on spindle growth
and movement during the machining process. The inside-out cooling provided by Spindle
Core Cooling takes advantage of centrifugal force to move cooling fluid through the spindle.
Traditional laser tool measurement systems have shown limitations when measuring cutting
tools below .020" (0.5 mm), a size common when machining micro parts.
New hybrid ATLM systems combine the benefits of touch and non-contact measuring in order
to verify tool tip position in the micron range regardless of spindle thermal expansion.
Figure 11 demonstrates the submicron precision obtained over a four hour period even when
completing multiple tool changes.
MICRO WIRE EDM
ED machining has been a mainstay of manufacturing for more than 50 years, providing unique
capabilities to the job shop and manufacturer alike.
Miniature parts and components have been produced for many years using the non-contact
capabilities of EDM systems. In fact, Life magazine published a photograph in the early sixties
of a series of micro holes that had been EDM'd through a needle, spelling out the magazine's name,
"LIFE".
The ability to automatically thread the wire and machine parts with a 20 µm (0.00078") wire and
achieve corner radii of less than 15 µm would have been unthinkable just a few years ago.
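The corner-radius figure follows from simple geometry: the smallest inside corner a wire EDM can cut is roughly the wire radius plus the spark gap. The sketch below is illustrative; the 4 µm gap is an assumed value, not one stated in the article.

```python
# Illustrative sketch: minimum inside corner radius in wire EDM is
# approximately wire radius + spark gap (gap value assumed, not from the article).

def min_corner_radius_um(wire_dia_um: float, spark_gap_um: float) -> float:
    """Approximate minimum inside corner radius, in micrometres."""
    return wire_dia_um / 2 + spark_gap_um

# With 20 um wire and an assumed ~4 um spark gap, corners near 14 um
# are plausible, consistent with the sub-15 um radii quoted above.
print(min_corner_radius_um(20, 4))  # -> 14.0
```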
The problems of small hole threading and hole proximity can now be conquered.
A key to this technology is the horizontal inclination of the wire and the utilization of air and
vacuum rather than a fluid to thread the wire. This is a radical departure from conventional
designs. Machining in the horizontal plane provides several advantages, including an integrated
"C" axis for work holding and automated part loading with slug removal systems that improve
automation.
The open design of a V-type wire guide improves automatic wire threading reliability. The
guide controls the cutting tool and must therefore be highly accurate. The V-type guide provides
three-point contact with the wire for superior wire alignment.
As discussed earlier, an environmental system for precise temperature control (within ± 0.5º C)
must be used when machining ultra-fine details.
Lastly, an oil dielectric is used instead of de-ionized water in order to provide:
A smaller spark gap, due to the added insulating strength of oil.
Superior surfaces, due to oil's quenching characteristics.
Elimination of rust problems during long unattended operation.
MICRO EDM SINKING
Today's CNC die sinking EDM systems are much more capable than their manual
predecessors, both for conventional mold manufacturing and for micro machining of complex
parts for a wide range of applications.
When talking about machining in the micron range it is necessary to look at machine design and
construction, as well as the ability to produce parts efficiently in a production environment.
Successful EDM'ing requires minute orbital motion in order to achieve the speed and surface
finish that make micro machining economical.
Machining of micro holes has become an immensely important micro machining application.
From micro start holes for newer wire EDM’s to production ED drilling of small holes,
EDM'ing has produced tremendous results. Figure 11 shows a silver tungsten electrode that
was ED dressed in the EDM machine to a 6 µm diameter for producing an 11 µm (0.00043")
diameter hole.
Holes with a length-to-diameter (L/D) ratio of 100:1 are possible by incorporating a high-pressure
dielectric pumping system (>800 psi) and using micro tubing down to 0.1 mm.
High-pressure seals in the rotating head and high precision rotation are necessary as well.
CONCLUSIONS
Micron and sub-micron manufacturing requirements will continue to grow offering unique
challenges and immense opportunities to a wide group of manufacturers. The designs and
construction of many machine tools, work and tool holders, cutting tools and electrodes will
naturally evolve as greater demands are placed on them when machining these miniature parts.
Many of the challenges will revolve around economically controlling the micro manufacturing
process. Many of these systems would not have been developed had it not been for industry's
demands for ever greater capability.
For these systems to perform successfully in the "Real World" requires cooperation and
imagination on everyone’s part.
In the end, it is the user that challenges the expertise of the OEM to develop effective micro
machining systems, processes and applications techniques to support the business.
16. Sensing Artificial Skin
Pradeep B. Patil
Asst. Professor
Advances in the development and sophistication of prosthetic limbs are numerous. Researchers
continue, however, to work on sensory needs: How to make an artificial limb feel objects and
allow a user to react to touch. Now, a team of Stanford University researchers is moving closer
to that goal, developing artificial skin made of very thin plastic layers that detects pressure
through a newly developed sensor, which then delivers an electric signal to neurons.
Image 1: A new sensor placed on an artificial hand allows a user to feel pressure and touch
A main intention of the research is to communicate information, or a stimulus, in a way the
brain can understand. This is the first time a flexible, skin-like material has been able to detect pressure
and also transmit a signal to a component of the nervous system. The work involves three
components: the sensor, a flexible circuit to transmit electronic signals and a neurological
recognition of the signal.
Image 2: Sensor & circuits on two-ply plastic “skin” using an inkjet process.
The sensor is comprised of two thin rubber-based plastic layers. The top layer creates a sensing
mechanism based on earlier work that showed how to use the "spring" of the molecular
structures of plastics and rubber, the components of the sensor. The sensitivity was increased by
fabricating the plastic with a waffle pattern, which compresses the plastic’s molecular springs
even further. The team then added billions of carbon nanotubes throughout the plastic. This
increases pressure sensitivity even more, as the nanotubes are pushed closer together during
touch, enabling them to conduct electricity.
Human skin transmits pressure information as short electrical pulses. As pressure on the
waffled plastic "skin" increases, the nanotubes are pushed still closer together, and the
electric pulses from the sensor ebb and flow as the pressure changes.
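The behaviour described above — pressure encoded as a pulse rate that rises with load — can be sketched with a hypothetical minimal model. Everything here (the saturating curve, the 200 Hz ceiling, the gain `k`) is illustrative and not taken from the sensor's actual characterization.

```python
# Hypothetical minimal model of pressure-to-pulse-rate encoding:
# the firing rate rises monotonically with pressure and saturates.
# All numbers are illustrative, not measured sensor parameters.
import math

def pulse_rate_hz(pressure_kpa: float, max_rate_hz: float = 200.0, k: float = 0.05) -> float:
    """Map applied pressure to a pulse frequency with a simple saturating curve."""
    return max_rate_hz * (1.0 - math.exp(-k * pressure_kpa))

# Heavier touch -> faster pulses, approaching the ceiling asymptotically.
for p in (0, 10, 50, 200):
    print(p, "kPa ->", round(pulse_rate_hz(p), 1), "Hz")
```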
Moving the electric pulses to nerve cells is the next step. This was accomplished by using
the firm's inkjet technology to print flexible circuits onto plastic. For sensitive artificial skin to
become practical, a large number of sensors must be able to be deposited over a large surface.
This work demonstrated its practicality.
But the electronic signal still must be recognized by the brain through a biological neuron to
complete the chain. Researchers worked in optogenetics, which combines optics and genetics.
Cells are bioengineered to make them sensitive to light frequencies. Light pulses can then be
used to switch cellular processes on and off.
The team engineered neurons to represent the human nervous system and translated electronic
impulses from the artificial skin into light pulses. Those pulses then activated the neurons,
closing the loop and showing the artificial skin can mimic the working of human skin.
There is still a long way to go to reach the full functionality of real skin: the human hand has
several sensing mechanisms, including temperature and texture, whereas this work demonstrates
only that a single pressure receptor acts in a certain way. Fully mimicking skin requires
implementing a number of different sensors. The approach lends itself to adding sensors for
other sensations as they are developed, supported by inkjet fabrication to lay down a network of
sensors over a plastic layer that would cover a prosthetic hand or arm. The density of receptors
in real skin, however, is very high.
References
1. D. De Rossi, A. Nannini, "Artificial sensing skin mimicking mechanoelectrical conversion
properties of human dermis," IEEE Transactions on Biomedical Engineering, ISSN 0018-9294,
vol. 35, no. 2, 2002, pp. 83-92.
2. V. J. Lumelsky, M. S. Shur, S. Wagner, "Sensitive skin," IEEE Sensors Journal,
vol. 1, no. 1, June 2001, pp. 41-51.
3. Y.-L. Park, B.-R. Chen, R. J. Wood, "Design and Fabrication of Soft Artificial
Skin Using Embedded Microchannels and Liquid Conductors," IEEE Sensors Journal,
vol. 12, no. 8, August 2012, pp. 2711-2718.
4. G. Cannata, M. Maggiali, G. Metta, G. Sandini, "An Embedded
Artificial Skin for Humanoid Robots," IEEE Sensors Journal, 2008.
5. T. Someya, T. Sekitani et al., "A large-area, flexible pressure sensor matrix with
organic field-effect transistors for artificial skin applications," PNAS, vol. 101, no. 27, 2004,
pp. 9966-9970.
17. Experimental study of single bubble dynamics during nucleate
pool boiling heat transfer using saturated water and ammonium chloride
Prashant B. Pawar
Asst. Professor
INTRODUCTION
The boiling enhancement observed is due to the change in the thermo-physical properties of the
aqueous solution. It is observed that as ammonium chloride is added to pure water, the surface
tension of the mixture reduces considerably.
Table 1: Physicochemical properties of ammonium chloride
Chemical formula | Ionic nature | Form        | Molecular weight (g/mol)
NH4Cl            | Anionic      | Crystalline | 53.49
The objective of the study is to compare bubble growth in pure water with that in a surfactant-additive
solution having negligible environmental effect. The surfactant used was ammonium
chloride, with pure water as the base liquid.
EXPERIMENTAL METHOD
A. Experimental Set-up
Fig. 1 Experimental set-up
The experimental set-up consists of a borosilicate glass container of 2 litre capacity. A DISPO VAN®
hypodermic needle, of the kind normally used in medical applications, was used as the heating surface.
A schematic diagram of the experimental set-up is shown in Fig. 1.
B. Needle Size
Table 2: Needle sizes
Needle gauge | Nominal outer dia. (mm) | Uncertainty (mm) | Nominal inner dia. (mm) | Uncertainty (mm) | Depth (mm)
20           | 0.908                   | ±0.0064          | 0.603                   | ±0.019           | 25
21           | 0.819                   | ±0.0064          | 0.514                   | ±0.019           | 25
RESULT AND DISCUSSION
A. Saturated Water at 430 kW/m²
In the nucleate boiling region, at 430 kW/m², the bubble departure diameter from a single
artificial nucleation site increases with wall superheat.
The bubble release frequency at this heat flux could not be measured completely due to the very
slow rate of bubble nucleation. The contact angle at the time of bubble release is measured as 51°
at every wall superheat.
The bubble departure diameter is measured from 2.14 mm at 9 K up to 2.39 mm at 15 K wall
superheat. The bubble release frequency is measured from 0.62 Hz up to 4.8 Hz over this set of
wall superheats. The bubble waiting time is measured as 0.03 s at each wall superheat temperature.
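As a rough cross-check (not part of the study itself), the latent heat carried away per nucleation site can be estimated from the measured departure diameter and release frequency, q = f·(π/6)·d³·ρ_v·h_fg. The vapor density and latent heat below are standard saturated-water values at 1 atm, not figures from the article.

```python
# Rough, illustrative estimate: latent-heat transport by one nucleation site,
# q = f * (pi/6) * d^3 * rho_v * h_fg, with standard saturated-water
# properties at 1 atm (property values assumed, not from the article).
import math

RHO_VAPOR = 0.598   # kg/m^3, saturated steam density at 100 C
H_FG = 2.257e6      # J/kg, latent heat of vaporization of water

def site_heat_w(departure_dia_m: float, frequency_hz: float) -> float:
    """Latent heat carried away per second by bubbles leaving a single site."""
    bubble_volume = math.pi / 6 * departure_dia_m ** 3  # spherical departure volume
    return frequency_hz * bubble_volume * RHO_VAPOR * H_FG

# Using the measured values at 15 K superheat: d = 2.39 mm, f = 4.8 Hz
print(round(site_heat_w(2.39e-3, 4.8) * 1e3, 1), "mW")  # -> 46.3 mW
```

A single site thus carries only tens of milliwatts as latent heat, which is why isolated artificial sites are studied with a high-speed camera rather than through bulk heat-balance measurements.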
B. Saturated Water at 900 kW/m²
In the nucleate boiling region, at 900 kW/m², the bubble release frequency from a single artificial
nucleation site increases with wall superheat. This is due to the increasing rate of vapor formation
inside the needle heating surface. The bubble departure diameter at this heat flux is measured as
almost constant, from 1.82 mm up to 2.1 mm. The contact angle at the time of bubble release is
measured as 47° at every wall superheat.
The bubble release frequency is measured between 1 Hz and 3 Hz over this set of wall superheats.
The bubble waiting time is measured as 0.03 s at each wall superheat temperature.
C. Ammonium Chloride as surfactant in saturated water at 2200 ppm at 601 kW/m²
In the nucleate boiling region, at 601 kW/m², the bubble departure diameter at this heat flux is
measured between 2.42 mm and 2.67 mm. The bubble waiting time is measured from 0.02 s to
0.11 s.
The bubble release frequency is measured from 1.56 Hz to 4.76 Hz over this set of wall superheats.
This increase in bubble release frequency is due to the increasing rate of formation of vapour
nuclei with wall temperature.
D. Ammonium Chloride as surfactant in saturated water at 2200 ppm at 950 kW/m²
In the nucleate boiling region, at 950 kW/m², the bubble departure diameter at this heat flux is
measured between 2.27 mm and 2.63 mm. The bubble waiting time is measured from 0.01 s to
0.03 s.
The bubble release frequency from a single artificial nucleation site increases with wall superheat.
This is due to the increasing rate of vapor formation inside the needle heating surface. The bubble
release frequency is measured from 2.86 Hz to 4.55 Hz over this set of wall superheats.
CONCLUSIONS
For saturated water, the bubble dynamics was studied using a PCO high-speed camera operating at
100 frames per second at atmospheric pressure. The hypodermic needle used had an inner diameter
of 0.603 mm and a constant depth of 25 mm. The bubble departure diameter increases with
increasing wall superheat at a heat flux of 430 kW/m². At a heat flux of 900 kW/m², the bubble
release frequency from a single artificial nucleation site increases with wall superheat, and the
bubble departure diameter at this heat flux is measured from 1.82 mm up to 2.1 mm.
For ammonium chloride, the effect of heat flux and wall superheat on bubble dynamics during
nucleate pool boiling heat transfer was studied experimentally. The bubble dynamics was studied
using a PCO high-speed camera operating at 100 frames per second at atmospheric pressure, at
heat fluxes of 601 kW/m² and 950 kW/m². A single bubble was generated using the right-angle
tip of a hypodermic needle as the nucleation site. The hypodermic needle used had an inner
diameter of 0.514 mm and a constant depth of 25 mm.
The captured images show that the single bubble grew rapidly, initially in a spherical shape and
then in a balloon-like shape, axisymmetrically, until reaching its maximum size, and then departed
from the right-angle tip of the needle. At a heat flux of 601 kW/m², the bubble departure diameter
is measured between 2.42 mm and 2.67 mm. The bubble release frequency increases from 1.56 Hz
to 4.76 Hz as the wall superheat rises from 3 K to 20 K.
At a heat flux of 950 kW/m², the bubble departure diameter is measured between 2.25 mm and
2.63 mm. The bubble release frequency increases from 1.08 Hz to 4.55 Hz as the wall superheat
rises from 3 K to 30 K. The increase in bubble release frequency is due to the higher rate of vapour
formation with increasing wall superheat at both heat fluxes, 601 kW/m² and 950 kW/m².
References
1. S. Siedel, S. Cioulachtjian, J. Bonjour, "Experimental Analysis of Bubble Growth, Departure
and Interactions during Pool Boiling on Artificial Nucleation Sites," Experimental
Thermal and Fluid Science, vol. 32, pp. 1504-1511, 2008.
2. S. Gajghate, A. R. Acharya, A. T. Pise, "Experimental study of aqueous ammonium chloride
in pool boiling heat transfer," A Journal of Thermal Energy Generation, Transport,
Storage, and Conversion, Taylor and Francis, vol. 27, no. 2, pp. 113-123, Nov. 2013.
3. Najim, A. R. Acharya, A. T. Pise, S. S. Gajghate, "Experimental Study of Bubble Dynamics
in Pool Boiling Heat Transfer using Saturated Water and Surfactant Solution," IEEE
International Conference on Advances in Engineering and Technology (ICAET-2014),
ISBN 978-1-4799-4949-6, 2014.