
Perpetual Wireless Video Sensors for Internet of Things

Shao-Yi Chien1,2 and Yen-Kuang Chen2,3

[email protected] and [email protected]

1National Taiwan University

2Intel-NTU Connected Context Computing Center

http://ccc.ntu.edu.tw/ 3Intel Corporation

1

• Slides/materials are available at – http://media.ee.ntu.edu.tw/research/distributed_video/

– Or http://goo.gl/Ifytc

• Source code is available at

– http://media.ee.ntu.edu.tw/research/opendvc/

– Or http://www.opendvc.org

2

Summary

• IoT will shape the way we live, play, work

• Ultra-low-power wireless video sensor systems will play a critical role

• Many challenges & opportunities – Context-aware distributed video coding

• Video coding complexity vs. wireless transmission cost

– Application-adaptive distributed video analysis • Flexibility vs. efficiency

New research opportunities

3

Outline

• Internet of Things (or Machine-to-Machine) – Introduction and overview

– Technical challenges

• Video sensor systems – Role and requirements of video cameras in IoT

– Power analysis of wireless video sensors

– Distributed video coding

– Distributed video analysis

• Summary

4

Benefits of Today’s Technologies

• Mobile phone

–Devices

–Wireless communication

• Internet

–Network

– Services

What’s Next? 5

M2M for Better Life by Intel-NTU Connected Context Computing Center

6

Needed: Machines to work together

My Dream of Future http://youtu.be/ujk1cprLpD8

8

Social Web of Things http://youtu.be/i5AuzQXBsG4

9

Social Web of Things http://youtu.be/i5AuzQXBsG4

10

Game Changing Capabilities

• Sensing
– Connected embedded sensors help us “hear/see” things that we could not hear/see in the past
– Enhance our senses with sensors

• Robotics
– Machinery can be much stronger and more precise than humans
– Enhance our capabilities with robotics

• Communication
– Things are constantly connected to networks
– Enhance our collaboration with wireless and broadband networks

• Analysis
– Sensor devices produce a sea of data that can be transformed into knowledge and intelligence
– Enhance our brains with big-data models and machine learning

11

Internet of Things (aka Machine-to-Machine)

• Definition – Smart devices will collect data – Relay information or context to each other – Process the information collaboratively – Prompt human or machine for further actions

• Status today

12

Imagine Everything was Linked

http://youtu.be/igsJxXMssGA

http://youtu.be/__pkZ4-kNvY

http://youtu.be/tvpRJrQ8Z2c

13

Applications (personal, enterprise, government):
• Intelligent signage & shopping recommendation
• Smart entertainment
• Save money and time; avoid unhappiness; stay healthy; feel special
• Chronic disease management
• Food & drug tracing and authentication
• Health monitoring; assisted living
• Environmental conservation
• Factory safety; vehicle safety
• Building safety & security
• Emergency response systems
• Natural disaster warning; infrastructure monitoring; homeland security; unmanned defense
• Efficient energy & water generation & consumption; smart metering and billing
• Efficient natural resource mining & transportation
• Traffic management; intelligent buildings; economical agriculture & breeding; supply chain automation; fleet management; factory automation; goods/product/shipment tracking
• Smart home; smart route planning

Significantly Improve our Life

14

Smart Agriculture

[Diagram: from present work (sticky paper, conventional greenhouses) to automatic greenhouses with automatic monitoring, and on to automatic plant factories with real-time pest management — automatic, greater output, more accurate]

15

Personal Healthcare System

[Diagram: healthcare devices → network → information system (e-mail, history analysis, interface, record); response center and personal healthcare cloud; device certification per IEC 60601-1 and IEC 60601-1-2]

16

Eco-House System

Different users performing different activities want different illumination & temperature
◦ Save energy while satisfying users’ requirements

Sensor
◦ Fixed sensor
◦ Portable sensor

Light
◦ Whole lighting device
◦ Local lighting device

Temperature
◦ Air conditioner

[Diagram: floor plan divided into grid cells G1–G25 with users A and B, fixed and portable sensors, whole and local lighting devices, an air conditioner, a sink, and a control server]

IoT will shape the way we live, play, work 17

IEEE Special Issues

IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS) *Low-Power, Reliable, and Secure Solutions for Realization of Internet of Things* 18

Outline

• Internet of Things (or Machine-to-Machine) – Introduction and overview

– Technical challenges

• Video sensor systems – Role and requirements of video cameras in IoT

– Power analysis of wireless video sensors

– Distributed video coding

– Distributed video analysis

• Summary

19

History
• 1999 – Kevin Ashton of the Auto-ID Center, MIT, coined the term “Internet of Things”
• 2003–2004 – The term was mentioned in mainstream publications such as The Guardian, Scientific American, and the Boston Globe
• 2005 – ITU published its first report on the topic
• 2006–2008 – Recognition by the EU; the first European IoT conference was held
• 2008 – The U.S. National Intelligence Council listed the Internet of Things as one of six “Disruptive Civil Technologies” with potential impacts on US interests out to 2025
• 2008–2009 – According to Cisco, the Internet of Things was born between 2008 and 2009, when more “things or objects” were connected to the Internet than people

* Source: http://postscapes.com/internet-of-things-history 20

Famous Projects

• IBM, Smarter Planet, 2008

• Japan, Ubiquitous Network Strategy, 2004

• Sensing China, 2009

• HP, CeNSE (Central Nervous System for the Earth), 2009

21

Terminology

• Is Internet-of-Things or machine-to-machine just another fancy name for wireless sensor networks or cyber-physical systems?

22

| Definition | Wireless Sensor Network | Cyber-Physical System | Internet of Things / M2M |
|---|---|---|---|
| Devices to sample physical data | Homogeneous, often inexpensive or tiny sensors | Dedicated high-precision sensors | Sensors of all classes; devices with built-in sensors |
| Devices/services to collect data and make decisions | Server or gateway devices | Server or dedicated computing services | Server or cloud computing services |
| Devices/services to receive commands | WSN base station | Custom-designed controllers and actuators | Mobile devices or machines with actuators |
| Communications between devices and services | Application-specific network protocols (LAN/WAN) | Dedicated network (LAN/WAN) | Internet or commercial network (WAN) |
| Example application | Environmental sensors | Hospital operation room | Traffic control, smart meter/home |

23

Wireless Sensor Network

Source: http://www2.ece.ohio-state.edu/~ekici/res_wmsn.html

24

Cyber-Physical Systems

Source: http://technorati.com/technology/it/article/cyber-physical-systems-the-new-wave/

25

Internet of Things

Source: http://casestudies.q3tech.com/case-studies/machine-to-machine.html 26

Challenges in IoT: A New Service Paradigm

Some data from: GreatWall Strategy Consultants

[Chart: the IoT market evolves from technology and standardization toward applications and services over 2011, 2013, 2015, and 2020, across the public sector, enterprise, and home/personal segments]

27

Technical Challenges

• Service
– Machines work for people frictionlessly & robustly
– Standard interfaces to foster innovation in the ecosystem

• Computation
– Answers are computed ahead of the questions
– Optimal distribution of device & cloud intelligence

• Communication
– Zero effort to connect large, dense populations of stationary and moving devices with high energy efficiency
– Complete data security and privacy

• Sensors
– Low power so that there is no need to change batteries
– “Zero-touch” deployment and management of devices

32

Opportunities in Sensing
• Low-power smart devices with embedded wireless capability
– Low-power sensing
– Low-power pre-processing
– Low-power TX/RX
– Energy harvesting

• Common programming platform across different sensors
– Ease of development before deployment
– Ease of reprogramming after deployment
– Self-configuration, optimization, healing, and protection

33

Opportunities in Communication

• More reliable, faster network for denser, faster-moving sensors with lower power across different protocols

• Automatic, seamless, persistent, and end-to-end data security


34

Opportunities in Computation

• Analytic model that can process immense amount of heterogeneous data into proper context

– Stream processing

– Anomaly detection

[Diagram: sensors → data → information → knowledge → wisdom → useful service]

35

Opportunities in Service

• Machines work for people not vice versa

• The success of M2M does not only depend on technology, but also service

– Data centric, but user friendly

• Standardization & ecosystem

– A standard for everyone to follow is critical for future large-scale M2M deployment

– Major M2M standards still under development; emerging applications are using their own standards


36

Ultra-low-power, Ubiquitous Sensing Technologies
• Low-power smart devices with embedded wireless capability
• Devices use very low power so that there is no need to change batteries
• Minimal maintenance needed

[Diagram: harvested energy powers low-power sensing; low-power analysis & compression turns raw data into key information; a low-power RF transceiver transmits it to an aggregator]

37

Outline

• Internet of Things (or Machine-to-Machine) – Introduction and overview

– Technical challenges

• Video sensor systems – Role and requirements of video cameras in IoT

– Power analysis of wireless video sensors

– Distributed video coding

– Distributed video analysis

• Summary

38

Video Cameras in IoT Applications?

• The growth of distributed video sensor deployment

– Surveillance camera

– Mobile phones

– Video sensors on cars

– Distributed sensors

• M2M network with video sensor nodes ---

Eyes of M2M Networks

39

40

Distributed Video Sensors

• Smart cameras: more computing power integrated in each camera

• Wide applications

– Video surveillance

– Intelligent transportation systems

– …

Internet

41

Technology Driving Forces

• Challenges
– Tremendous bandwidth
– Huge computation requirements
– Difficulty and cost of deployment

• Technology driving forces
– VLSI technology
– Advanced computer vision and video analysis algorithms
– Advanced coding algorithms
– Large-scale data analysis
– Energy harvesting technology
– Low-power data transmission systems
– …

42

Technology Driving Forces

• Ultra-low cost/power video sensor

• Perpetual video cameras

43

Different from Surveillance Network?

44

Source: VIVOTEK Inc.

Outline

• Internet of Things (or Machine-to-Machine) – Introduction and overview

– Technical challenges

• Video sensor systems – Role and requirements of video cameras in IoT

– Power analysis of wireless video sensors

– Distributed video coding

– Distributed video analysis

• Summary

45

Power Analysis Platforms

• Processor based platform – SoCle MDK3D

– SoC with ARM11 processor

– X264 codec

– QCIF 15fps

• ASIC based platform – 65nm technology

– Based on NTU H.264 encoder (ISSCC2005)

– VGA 30fps

Ref: S.-Y. Chien, T.-Y. Cheng, S.-H. Ou, C.-C. Chiu, C.-H. Lee, V. S. Somayazulu, and Y.-K. Chen, "Power Consumption Analysis for Distributed Video Sensors in Machine-to-Machine Networks," IEEE Journal on Emerging and Selected Topics in Circuits and Systems., vol. 3, no. 1, March 2013.

46

Measurement Environment

47

Measurement Environment

48

Power Analysis Results: Processor-Based Platform

• Total: 2.49W (H.264 Intra)

Ps: sensor power, Pc: coding power, Pt: transmission power

[Pie chart — Ps: 3%, Pc: 97%, Pt: 0%: coding power dominates on the processor-based platform]

49

Power Analysis Result: ASIC-Based Platform

• Total: 106.89mW (H.264 Intra)

Ps: sensor power, Pc: coding power, Pt: transmission power

[Pie chart — Ps: 67%, Pc: 6%, Pt: 27%: sensor power dominates on the ASIC-based platform]

50

Essentials of Energy Harvesting

• The growing demands of IoT networks
• The power consumption of low-power VLSI circuits is dropping to tens or hundreds of μW
– Micro energy-harvesting devices can therefore support their operation

Typical power densities:
• RF energy: 0.1 mW/cm²
• Solar panel: 100 mW/cm²
• Vibration (piezoelectric): 100 mW/cm²
• Thermoelectric (Seebeck device): 10 mW/cm²

51
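Given the nominal power densities above, the harvesting area needed to sustain a given average load is a simple division. A quick sketch (the load values below are illustrative, not taken from the slides):

```python
def harvester_area_cm2(load_mw, density_mw_per_cm2):
    """Harvesting area (cm^2) needed to sustain a given average load (mW)."""
    return load_mw / density_mw_per_cm2

# e.g., a 10 mW node powered purely from RF energy at 0.1 mW/cm^2
area = harvester_area_cm2(10, 0.1)  # 100 cm^2
```

The same arithmetic explains why lowering node power into the tens-of-mW range (or below) is what makes small harvesters practical.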

Commercialized Product Review

Solar Energy Harvesting Thermal Energy Harvesting Mechanical Energy Harvesting

52

TI Energy Harvesting Development Kit

http://www.ti.com/ww/en/apps/energy-harvesting 53

Outline

• Internet of Things (or Machine-to-Machine) – Introduction and overview

– Technical challenges

• Video sensor systems – Role and requirements of video cameras in IoT

– Power analysis of wireless video sensors

– Distributed video coding

– Distributed video analysis

• Summary

54

Compression is Necessary!

• For 640x480 RGB 30fps video from 10 video sensors

– 640×480×24×30×10 ≈ 2.2 Gbps!

55
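The raw-bandwidth arithmetic can be checked directly (the product works out to about 2.2 Gbps):

```python
def raw_bitrate_bps(width, height, bits_per_pixel, fps, num_sensors):
    """Aggregate bitrate of uncompressed video from several sensors."""
    return width * height * bits_per_pixel * fps * num_sensors

# 10 sensors, 640x480 RGB (24 bpp) at 30 fps
total = raw_bitrate_bps(640, 480, 24, 30, 10)
print(total / 1e9)  # ~2.21 Gbps
```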

56


Image Sequence Model

57

Conventional Hybrid Video Coding Process

[Diagram: frames I(t−1) and I(t) → reduce temporal redundancy → reduce spatial redundancy → reduce statistical redundancy]

58

Basic Video Coding Flow

[Encoder block diagram: the input image I(t) minus the motion-compensated prediction Î(t) gives the residue e(t) = I(t) − Î(t); the residue goes through DCT → Q → VLC to the output bitstream, and through IQ → IDCT, added back to the prediction, to form the reconstruction Ĩ(t) stored in the frame buffer; motion estimation against Ĩ(t−1) produces the motion vectors]

59
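The closed prediction loop above can be sketched with scalar “frames” and a coarse quantizer standing in for the DCT/Q/IQ/IDCT chain (motion estimation omitted): the encoder tracks exactly the reconstruction the decoder will form, which is why the loop is closed.

```python
def quantize(v, step=8):
    # coarse quantizer stands in for DCT + Q + IQ + IDCT
    return (v // step) * step

def encode(frames):
    ref, stream = 0, []          # ref mirrors the decoder's reconstruction
    for f in frames:
        e = f - ref              # residue e(t) = I(t) - Ihat(t)
        eq = quantize(e)
        stream.append(eq)
        ref = ref + eq           # reconstruction Itilde(t), kept in the frame buffer
    return stream

def decode(stream):
    ref, out = 0, []
    for eq in stream:
        ref = ref + eq           # same loop as the encoder's frame buffer
        out.append(ref)
    return out
```

Because encoder and decoder run the same reconstruction loop, quantization error never accumulates across frames.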

Decoding Block Diagram

[Decoder block diagram: coded bitstream → VLD → IQ → IDCT yields the decoded residue ẽ(t); adding the motion-compensated prediction Î(t), formed from the previous reconstructed frame Ĩ(t−1), gives the reconstructed frame Ĩ(t) at the video output. Codec = Encoder + Decoder]

60

Stage 1 - Reducing Temporal Redundancy

• Segment a frame into macroblocks

• Compensate motion and remove temporal redundancy

• Output energy is related to the degree of temporal redundancy

• This stage is Inter-frame coding

61

Stage 2 - Reducing Spatial Redundancy
• Processing the difference frame (spatially correlated) from stage 1
• Usually using DCT coding
• This stage is the intra-frame coder
• The method combining these two stages is the hybrid coding method

Conventional Video Coding

• MPEG-1/2/4, H.261/H.263/H.264

62

[Block diagram of an H.264 encoder: video source minus the predicted frame gives the residual frame, which passes through transform → quantization → entropy coding to the bitstream out, together with the motion vectors; the quantized transformed coefficients also pass through inverse quantization and inverse transform, are added back to the prediction, deblocking-filtered, and stored in the frame buffer; prediction comes from motion estimation/compensation (inter) or intra prediction, selected by the coding control]

Ref: Shao-Yi Chien, Yu-Wen Huang, Ching-Yeh Chen, Homer H. Chen, and Liang-Gee Chen, “Hardware architecture design of video compression for multimedia communication systems,” IEEE Communications Magazine, vol. 43, no. 8, pp. 122—131, Aug. 2005.

Characteristics of Conventional Video Coding Systems

• Good coding performance

• Complex encoder and simple decoder

• Close-loop coding system

• Not robust over noisy channel

• Suitable for M2M networks?

63

New paradigm --- Distributed Video Coding (DVC)

64

• Distributed compression refers to the coding of two (or more) dependent random sequences.

• Special case of distributed video coding

– Compression with the side information

Ref: B. Girod, A. M. Aaron, S. Rane, and D. Rebollo-Monedero, "Distributed Video Coding," Proceedings of the IEEE, vol. 93, no. 1, Jan. 2005.

Fundamental of Distributed Source Coding

65

• Slepian–Wolf theorem

– Separate conventional encoders

• Rx ≥ H(X), Ry ≥ H(Y)

– With a joint decoder

• Rx + Ry ≥ H(X,Y)

• Rx ≥ H(X|Y), Ry ≥ H(Y|X)
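These bounds can be checked numerically for a doubly symmetric binary source — X uniform, Y equal to X flipped with probability 0.1 (the crossover probability is an illustrative assumption):

```python
import math

def H(p):
    # Shannon entropy (bits) of a distribution given as a list of probabilities
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

flip = 0.1
joint = [0.5 * (1 - flip), 0.5 * flip, 0.5 * flip, 0.5 * (1 - flip)]  # P(x, y)

Hx = H([0.5, 0.5])        # H(X) = 1 bit (and H(Y) = 1 bit by symmetry)
Hxy = H(joint)            # H(X,Y) = 1 + h(0.1) ~ 1.47 bits
Hx_given_y = Hxy - Hx     # H(X|Y) ~ 0.47 bits, using H(Y) = H(X) here

# Separate encoding needs Rx + Ry >= H(X) + H(Y) = 2 bits per pair;
# with a joint decoder, Rx + Ry >= H(X,Y) ~ 1.47 bits per pair suffices.
```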

Source Coding Method

66

• Channel coding – LDPC, turbo code

• Consider two similar binary sources (X, Y)
– X: source, Y: side information
– We use a systematic channel code to generate parity bits to protect X
– Treat Y as the received signal with noise
– Perform error-correction decoding

• Compression is achieved because only the parity bits of the error-correction code are sent to the decoder
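The binning idea can be made concrete with the (7,4) Hamming code: send only the 3 syndrome bits of a 7-bit X, and let the decoder pick, within the coset of that syndrome, the word closest to the side information Y. This is a toy sketch, not the turbo/LDPC codes used in real DVC systems:

```python
import itertools

# Parity-check matrix of the (7,4) Hamming code
Hmat = [(1, 0, 1, 0, 1, 0, 1),
        (0, 1, 1, 0, 0, 1, 1),
        (0, 0, 0, 1, 1, 1, 1)]

def syndrome(x):
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in Hmat)

def dsc_decode(s, y):
    # brute force: the coset member nearest to the side info y
    coset = (x for x in itertools.product((0, 1), repeat=7) if syndrome(x) == s)
    return min(coset, key=lambda x: sum(a != b for a, b in zip(x, y)))

x = (1, 0, 1, 1, 0, 0, 1)   # source word
y = (1, 0, 1, 1, 0, 1, 1)   # side information: x with one bit flipped
s = syndrome(x)             # only 3 bits are transmitted instead of 7
```

Since coset members differ by codewords of weight ≥ 3, the word within Hamming distance 1 of Y is unique, so X is recovered exactly from 3 bits plus the side information.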

Distributed Video Coding

67

• Wyner-Ziv & Slepian-Wolf Coding

– Slepian-Wolf Coding

• Channel coding (turbo code , LDPC)

• Encoder only transmits the parity bits to decoder

Distributed Video Coding

68

• Pixel-domain encoding

– Key frame

• Coded as a conventional intra frame

– Wyner-Ziv frame

• Each pixel is uniformly quantized into 2^M intervals

• Intra-frame coded but inter-frame decoded

– Slepian-Wolf coder

• Rate-compatible punctured turbo code (RCPT)

• Request-and-decode process
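The uniform quantizer for Wyner-Ziv frames is straightforward; a minimal sketch (a real decoder reconstructs toward the side information rather than simply taking the interval midpoint):

```python
def wz_quantize(pixel, M, maxval=255):
    """Uniformly quantize a pixel into 2**M intervals over [0, maxval]."""
    levels = 2 ** M
    return min(pixel * levels // (maxval + 1), levels - 1)

def wz_reconstruct(q, M, maxval=255):
    """Midpoint of quantization interval q (a simplification)."""
    levels = 2 ** M
    return (q + 0.5) * (maxval + 1) / levels
```

For example, with M = 2 there are 4 intervals of width 64, so any 8-bit pixel maps to one of 4 indices.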

Distributed Video Coding

69

• Pixel-Domain Encoding – Block diagram

Ref: B. Girod, A.M. Aaron, S. Rane, D. Rebollo-Monedero, "Distributed Video Coding," Proceedings of the IEEE , vol.93, no.1, pp.71-83, Jan. 2005

Distributed Video Coding

70

• Transform-domain encoding

– Conventional coding

• Transform spatial data into spectral data – e.g., DCT, KLT, wavelet, etc.

– Perform blockwise DCT on the Wyner-Ziv frame

• The decoder gets side information (spectral) from previous frames

• A bank of turbo decoders reconstructs the quantized coefficient bands

• Each coefficient band is reconstructed with the side information

Distributed Video Coding

71

• Transform-Domain Encoding

Highly-Cited DVC: DISCOVER Codec

72

Ref: X. Artigas, J. Ascenso, M. Dalai, S. Klomp, D. Kubasov, and M. Ouaret, "The DISCOVER codec: Architecture, techniques and evaluation," in Proc. Picture Coding Symposium (PCS’07), Nov. 2007.

[Block diagram signals: Wyner-Ziv frame X_WZ and its DCT X_DCT; interpolated references X'_F and X'_B; side information Y and Y_DCT; correlation model P(X|Y) with Laplacian distribution (α); decoded coefficients X'_DCT; reconstruction R; key frames X_k]

Unified DVC

73

Ref: Chieh-Chuan Chiu, Shao-Yi Chien, Chia-han Lee, V. Srinivasa Somayazulu, and Yen-Kuang Chen, "Distributed video coding: a promising solution for distributed wireless video sensors or not?" in Proc. Visual Communications and Image Processing 2011, Nov. 2011.

Analysis of Existing DVC Systems

• Analysis environment

– DISCOVER codec

• Improved frame interpolation with spatial motion smoothing

• Online correlation noise modeling

• LDPCA for syndrome coding

– Conditions

• Sequences: Foreman, Coastguard, and Hall Monitor

• Resolution: CIF at 30Hz

• Q tables from DISCOVER

• GOP is 2 for DVC, and GOP is 30 for H.264/AVC SP

74
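The rate-distortion comparisons that follow plot PSNR against bitrate; for 8-bit video, PSNR = 10·log10(255²/MSE). A minimal sketch:

```python
import math

def psnr(orig, recon, peak=255):
    """PSNR (dB) between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    if mse == 0:
        return float('inf')
    return 10 * math.log10(peak ** 2 / mse)
```

A uniform error of 16 gray levels (MSE = 256) gives roughly 24 dB, which is why the curves below span the ~27-42 dB range of visually acceptable quality.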

Rate-Distortion Performance

• Most widely used DVC: DISCOVER (2010)

75

Foreman

Rate-Distortion Performance

• Most widely used DVC: DISCOVER (2010)

76

Hall Monitor

Rate-Distortion Performance

• Most widely used DVC: DISCOVER (2010)

77

Coastguard

Power Analysis Platforms

• Processor based platform

– SoCle MDK3D

– SoC with ARM11 1GHz processor

– X264 codec

– QCIF 15fps

• ASIC based platform

– 65nm technology

– Based on NTU H.264 encoder (ISSCC2005)

– VGA 30fps

78

Ref: S.-Y. Chien, T.-Y. Cheng, S.-H. Ou, C.-C. Chiu, C.-H. Lee, V. S. Somayazulu, and Y.-K. Chen, "Power Consumption Analysis for Distributed Video Sensors in Machine-to-Machine Networks," IEEE Journal on Emerging and Selected Topics in Circuits and Systems., vol. 3, no. 1, March 2013.

Power Analysis Results

• ASIC-based sensor node – Estimated with TinnoTek PowerMixer

[Bar chart: normalized power consumption (%) — H.264: 100, H.264 No Motion: 37, H.264 Intra: 36, DVC: 7]

79

Power Analysis Results

• ASIC-based sensor node

80

[Stacked bar chart (Ps/Pt/Pc), power in mW: H.264 Intra 106.89 mW, H.264 No Motion 95.33 mW, DISCOVER 97.09 mW]

Power Analysis Results

• Processor-based sensor node

81

[Stacked bar chart (Ps/Pt/Pc), power in W: H.264 Intra 2.49 W, H.264 No Motion 2.17 W, DISCOVER 1.41 W]

Much Better Error Robustness

82

Ref: R. Puri, A. Majumdar, P.Ishwar, and K. Ramchandran, “Distributed Video Coding in Wireless Sensor Networks,” IEEE Signal Processing Magazine, July, 2006

Much Better Error Robustness

| | BD-rate in DVC | BD-rate in AVC |
|---|---|---|
| PER=1% | 7.8% | 20.2% |
| PER=2% | 16.4% | 39.4% |
| PER=3% | 20.2% | 60.7% |

[RD plots, PSNR (dB) vs. kbit/s: DVC with PER = 0 (anchor), 1%, 2%, 3%; H.264 (with AEC) with PER = 0 (anchor), 1%, 2%, 3%]

83
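The BD-rate figures used for these comparisons can be computed in the usual Bjøntegaard way: fit log-rate against PSNR with a cubic and compare the average log-rate over the overlapping PSNR range. A sketch using NumPy:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average bitrate difference (%) of the test curve vs. the anchor,
    from cubic fits of log-rate as a function of PSNR (>= 4 points each)."""
    p_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))   # overlapping PSNR range
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(p_a), np.polyint(p_t)
    avg_a = (np.polyval(ia, hi) - np.polyval(ia, lo)) / (hi - lo)
    avg_t = (np.polyval(it, hi) - np.polyval(it, lo)) / (hi - lo)
    return (np.exp(avg_t - avg_a) - 1) * 100
```

A positive result means the test codec needs more bitrate than the anchor for the same PSNR, so the table's lower DVC values indicate better error resilience.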

Outline

• Internet of Things (or Machine-to-Machine) – Introduction and overview

– Technical challenges

• Video sensor systems – Role and requirements of video cameras in IoT

– Power analysis of wireless video sensors

– Distributed video coding • State-of-the-art DVC systems

– Distributed video analysis

• Summary

84

State-of-the-Art DVC

85

State-of-the-Art DVC

86

State-of-the-Art DVC

87

State-of-the-Art DVC

• Exists in PRISM
• Determines the coding mode
• Generates the hash code

88

State-of-the-Art DVC

• Turbo code or LDPC
• Requests more parity bits via the feedback channel when the error probability is high
• Leads to long latency
• Rate estimator
• CRC is employed to verify correct decoding

89

State-of-the-Art DVC

• Frame interpolation
• True motion estimation
• Advanced motion estimation
• Noise filter
• Side information refinement
• Multi-resolution motion refinement

90

State-of-the-Art DVC

• Laplacian distribution
• On-line modeling, TRACE

91
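A zero-mean Laplacian with parameter α is the standard correlation-noise model here; since a Laplacian has variance 2/α², α can be estimated from the residual between the side information and the decoded frame as α = √(2/σ²). A minimal sketch:

```python
import math

def estimate_alpha(residuals):
    """Fit alpha of a zero-mean Laplacian to residual samples: alpha = sqrt(2/var)."""
    var = sum(r * r for r in residuals) / len(residuals)
    return math.sqrt(2.0 / var)

def laplacian_pdf(x, y, alpha):
    # correlation noise model P(x | side information y) = (alpha/2) * exp(-alpha*|x-y|)
    return 0.5 * alpha * math.exp(-alpha * abs(x - y))
```

These likelihoods become the soft inputs of the turbo/LDPC decoder, so a better α estimate directly reduces the parity rate the decoder requests.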

State-of-the-Art DVC

• MMSE
• MRF
• Deblocking

92

IST’s DVC Work

Ref: R. Martins, C. Brites, J. Ascenso, and F. Pereira, “Refining Side Information for Improved Transform Domain Wyner–Ziv Video Coding,” IEEE TCSVT, vol. 19, no. 9. Sep. 2009.

93

IST’s DVC Work

Ref: R. Martins, C. Brites, J. Ascenso, and F. Pereira, "Statistical motion learning for improved transform domain Wyner–Ziv video coding," IET Image Processing, vol. 4, no. 1, 2010.

94

Features

• Statistical motion field (SMF) estimator

– Uses the motion field: the SSE value of each displacement in the initial SI frame

– SI re-estimation

• SI is re-estimated by averaging all candidates, with the SMF as weighting factors

95

Features

• Correlation noise distribution model

– Use mixture of Laplacian

96

R-D Performance

97

R-D Performance

98

IST’s Latest Work

Ref: C. Brites, J. Ascenso, F. Pereira, "Learning based Decoding Approach for Improved Wyner-Ziv Video Coding," in Proc. Picture Coding Symposium (PCS2012), May 2012.

99

Features

• Fractional-pixel motion field learning – Partially block updating

– Fractional-pixel motion estimation

– Motion field and side information updating with Φ candidate blocks

• CNM (correlation noise model) parameter learning
– Update parameter α with each refined residual frame

100

Experimental Results

Ref: C. Brites, J. Ascenso, F. Pereira, "Learning based Decoding Approach for Improved Wyner-Ziv Video Coding," in Proc. Picture Coding Symposium (PCS2012), May 2012.

101

NTU CMLab’s Work

Ref: Y.-C. Shen, P.-S. Wang, and J.-L. Wu, "Progressive Side Information Refinement with Non-Local Means Based Denoising Process for Wyner-Ziv Video Coding," Proc. Data Compression Conference, 2012. 102

R-D Performance

103

Intel-NTU Connected Context Computing Center’s Work

Channel Coding/Entropy Coding/Skip Mode

Residual Coding Side Information Refinement

Ref: C.-C. Chiu, S.-Y. Chien, C.-h. Lee, V. S. Somayazulu, and Y.-K. Chen, "Hybrid distributed video coding with frame level coding mode selection," in Proc. ICIP, 2012. 104

Key Features

• Residual Coding

– We only encode the difference between the current frame and the previous/future frame.

• Side Information Refinement

• Skip Mode

• Entropy Coding (CAVLC)

• Frame level coding mode selection

105

Side Information Refinement

• Better side information reduces the number of parity bits sent by the encoder.

• Block Selection for Refinement

– DC refinement

– AC refinement

• Block Selection for Refinement

– DC refinement

– AC refinement

106

[Diagram: transform-domain and pixel-domain paths. Notation — Y: side information; R: current reconstructed frame; X_F: previous key frame; X_B: next key frame]

Side Information Refinement

• Candidate block searching
– A [−4, +4] block motion search range is applied
– Bidirectional ME (previous and next key frames)
– Candidate block filtering
– Candidate MVs are stored in a list

– Bidirectional ME (previous key frame and next key frame)

– candidate block filtering

– Candidate MV are stored in the list

107


Side Information Refinement

• Generate refined side information

– Update the current side information and the α values of the refined blocks

108


Skip Mode Decision

• Skip mode is widely used in traditional video codecs.

• A residual block is skipped if its distortion is lower than a predefined threshold.

• Distortion

• Skip mask

• Run-length coding is used to encode the skip mask

109
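The skip decision and run-length coding of the mask can be sketched as follows; the distortion measure (sum of absolute residuals) and the threshold are illustrative stand-ins for the actual criterion:

```python
def skip_mask(residual_blocks, threshold):
    # 1 = coded, 0 = skipped: skip a block whose residual distortion is low
    return [0 if sum(abs(v) for v in blk) < threshold else 1
            for blk in residual_blocks]

def rle(mask):
    """Run-length encode the skip mask as (bit, run_length) pairs."""
    runs, prev, count = [], mask[0], 0
    for bit in mask:
        if bit == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = bit, 1
    runs.append((prev, count))
    return runs
```

In static scenes long runs of skipped blocks compress to a handful of pairs, which is where the bitrate saving comes from.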


Frame Level Coding Mode Selection

• Concept

– Entropy coding can yield better RD performance than channel coding if the energy of the residues is small.

– Channel coding outperforms entropy coding if the energy of the residues is large (the entropy value becomes large).

• There are 4 coding modes

– Channel Coding: channel coding is used to code all bands.

– Hybrid Mode 1: channel coding is used to code the lower three frequency bands; the others are coded with CAVLC.

– Hybrid Mode 2: channel coding is used to code the lower six frequency bands; the others are coded with CAVLC.

– Entropy Coding: CAVLC is used to code all bands.

110

Frame Level Coding Mode Selection

• Average amplitude of each band

• Divide 16 bands into three groups and calculate

the energy of each group

111
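The grouping-and-selection step might look like the sketch below; the band-to-group assignment and the thresholds are illustrative assumptions, not the values used in the referenced paper:

```python
def group_energies(band_amplitudes,
                   groups=((0, 1, 2), (3, 4, 5, 6, 7), tuple(range(8, 16)))):
    """Sum the average amplitudes of the 16 DCT bands per group.
    The band-to-group assignment here is illustrative."""
    return [sum(band_amplitudes[i] for i in g) for g in groups]

def select_mode(energies, thresholds=(10.0, 40.0)):
    # illustrative thresholds: small residual energy favors entropy coding
    # (CAVLC), large energy favors channel coding, in between the hybrids
    total = sum(energies)
    if total < thresholds[0]:
        return "Entropy Coding"
    if total > thresholds[1]:
        return "Channel Coding"
    return ("Hybrid Mode 1" if energies[0] > energies[1] + energies[2]
            else "Hybrid Mode 2")
```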

[Diagram: the 4×4 quantized-coefficient bands partitioned into three groups G0, G1, and G2]

Usage Rate for Each Mode

112

[Stacked bar chart: usage rate (0–100%) of Channel Coding, Hybrid Mode 1, Hybrid Mode 2, and Entropy Coding for Foreman, Coastguard, and Hall]

Encoding Complexity

113

[Plot: total encoding time (sec) vs. Qi (8 down to 1), Foreman QCIF 150 frames — Baseline (DISCOVER), Hybrid DVC with CMS, H.264 No Motion]

Foreman

114 (Foreman, QCIF, 150 frames, GOP 2)

[RD plot: PSNR (dB) vs. bitrate (kbps) for DISCOVER, H.264 No Motion, H.264 Intra, Hybrid DVC, and Hybrid DVC with CMS]

Coastguard

115 (Coastguard, QCIF, 150 frames, GOP 2)

[RD plot: PSNR (dB) vs. bitrate (kbps) for DISCOVER, H.264 No Motion, H.264 Intra, Hybrid DVC, and Hybrid DVC with CMS]

Hall Monitor

116 (Hall Monitor, QCIF, 150 frames, GOP 2)

[RD plot: PSNR (dB) vs. bitrate (kbps) for DISCOVER, H.264 No Motion, H.264 Intra, Hybrid DVC, and Hybrid DVC with CMS]

Soccer

117 (Soccer, QCIF, 150 frames, GOP 2)

[RD plot: PSNR (dB) vs. bitrate (kbps) for DISCOVER, H.264 No Motion, H.264 Intra, Hybrid DVC, and Hybrid DVC with CMS]

Experimental Result

118 (Hall, QCIF, 150 frames, GOP 2, 4, 8)

[RD plot: PSNR (dB) vs. bitrate (kbps) — H.264 No Motion vs. Hybrid DVC with CMS at GOP 2, 4, and 8]

Experimental Result

119 (Hall, QCIF, 150 frames, GOP 2, 4, 8)

[RD plot: PSNR (dB) vs. bitrate (kbps) — Hybrid DVC with CMS vs. DISCOVER at GOP 2, 4, and 8; about a 6 dB gain]

Outline

• Internet of Things (or Machine-to-Machine) – Introduction and overview

– Technical challenges

• Video sensor systems – Role and requirements of video cameras in IoT

– Power analysis of wireless video sensors

– Distributed video coding • State-of-the-art DVC systems

– Distributed video analysis

• Summary

120

Content Analysis is the Key Component for Perpetual Video Cameras

• To get the expected result, we need to reduce the transmission power and sensing power

Pt: transmission power; Pc: coding power; Ps: sensing power

[Stacked bar chart (Ps/Pt/Pc/Pa, with Pa the analysis power): H.264 Intra 106.89 mW, H.264 No Motion 95.33 mW, DISCOVER 97.09 mW, Advanced DVC (estimated) 92.86 mW, Advanced DVC (estimated, GOP=8) 82.15 mW, with analysis engine (estimated) 39.72 mW]

121

Video Content Analysis

• For distributed video sensor network, many sensors cover the same area

122

Video Content Analysis

• Most of the time there is no special event, so we can save energy by

– Reducing the quality and data size of the video

– Turning off the sensors

123

Video Content Analysis

• When a special event occurs, some sensors need to be turned on or turned to high-quality mode to record the important information

124
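The energy benefit of this event-driven operation follows from simple duty-cycling arithmetic; all power numbers and the event fraction below are illustrative, not measurements from the slides:

```python
def camera_mode(event_detected):
    # stay in a low-power state until the analysis engine reports an event
    return "high-quality" if event_detected else "low-power"

def average_power_mw(p_active_mw, p_idle_mw, event_fraction):
    # rare events mean the node runs mostly at idle power
    return p_active_mw * event_fraction + p_idle_mw * (1 - event_fraction)

# e.g., 100 mW when recording, 5 mW idle, events 10% of the time
avg = average_power_mw(100, 5, 0.1)  # ~14.5 mW average
```

The rarer the events, the closer the average power gets to the idle floor, which is what makes harvesting-powered perpetual operation plausible.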

Video Analysis Algorithms

• Segmentation

• Tracking

• Face detection/scoring

125

126

Video Analysis Algorithms

2006 MXIC Contest Excellent Award

Semantic Levels

[Diagram: semantic levels of video — from raw pixels I(x,y,t) and conventional scalability, to objects in the video, up to a semantic description: “Two men walked in the office…”]

127

Reduce the Traffic Load with Semantic Levels

128

[Plots: bits per frame over ~1600 frames for PETS2001 Camera 1 (frame #1116 shown) and Camera 2 (frame #660 shown)]

Ref: Shao-Yi Chien and Wei-Kai Chan (2011). Cooperative Visual Surveillance Network with Embedded Content Analysis Engine, Video Surveillance, Available from: http://www.intechopen.com/articles/show/title/cooperative-visual-surveillance-network-with-embedded-content-analysis-engine

Data rate with conventional video surveillance vs. data rate with event-driven video surveillance

Video Analysis Algorithm for Single Camera

129

[Pipeline diagram: source video → video segmentation with multi-background registration → connected component labeling → feature extraction → merge/split condition decision → object classification, merged object update, and removal of no-longer-existent objects → update object database. A parallel face path: automatic white balance → skin color detection → face candidate selection → face feature extraction → face scoring & ranking]

Ref: Shao-Yi Chien and Wei-Kai Chan (2011). Cooperative Visual Surveillance Network with Embedded Content Analysis Engine, Video Surveillance, Available from: http://www.intechopen.com/articles/show/title/cooperative-visual-surveillance-network-with-embedded-content-analysis-engine

Reconfigurable Smart Camera Processor Design

• Wei-Kai Chan, Yu-Hsiang Tseng, Pei-Kuei Tsung, Tzu-Der Chuang, Yi-Min Tsai, Wei-Yin Chen, Liang-Gee Chen, and Shao-Yi Chien, “ReSSP: A 5.877 TOPS/W Reconfigurable Smart Camera Stream Processor,” in Proc. Custom Integrated Circuits Conference (CICC), Sep. 2011.

130

Vision Processor Design Challenges

• ASIC or Processor-based Solution ?

– Performance vs Programmability

• Existing vision processors

[Table: on-chip memory (Kb) of existing vision processors — 1024, 3168, and 4896 Kb for the ISSCC 2008 iVisual and the ISSCC 2009 and 2010 KAIST chips; targets range from 128×128 image analysis to SIFT at 640×480 and 320×240. Large on-chip memory → large area cost, limited image analysis capability, …]

131

Vision Processor Design Challenges

• What is the most appropriate hardware architecture?

• Design Challenges

1. Programmability

2. High performance

3. Heterogeneous data type

4. Low cost

5. Low power

132

Proposed Hardware Architecture and Design Techniques

• Tradeoff Between Flexibility and Performance

Selected solution

133

Proposed Hardware Architecture and Design Techniques

• Coarse-grained Reconfigurable Image Streaming Processing Architecture (CRISPA)
– Reconfigurable architecture for hardware resource sharing across different vision algorithms
– Algorithmic similarities → coarse-grained reconfigurable processing elements
– Data-access similarities → improved memory accessing for vision processing

134

[Diagram: conventional SIMD array — a large PE array with controller and program memory, a large on-chip memory buffer, and a high-bandwidth channel to off-chip memory. Drawbacks: large PE array, large on-chip memory, large bandwidth]

135

Proposed CRISPA

[Diagram: a chain of Reconfigurable Stage Processing Elements (RSPEs A, B, C, D, ...) linked through a reconfigurable interconnection; each RSPE is configured by context registers (CR) written over the SoC bus, with input and output interfaces at the ends of the stream]

136

Proposed Hardware Architecture and Design Techniques

• Heterogeneous streaming processing (HSP)

– Increases processing throughput

– Handles streams with different data types via streaming interfaces

• Sub-word-level parallelism (SLP)

– Increases processing throughput

– Higher bus utilization

– Increases hardware sharing

137

Overall Architecture of ReSSP

[Diagram: host processor and external memory on a 64-bit system bus (AHB master/slave); input and output stream interfaces; a main controller with context registers; and a pool of Reconfigurable Stream Processing Elements (RSPEs) linked by Reconfigurable Interconnections (RI). RSPEs specially designed for smart-camera applications: Segmentation, Object Info, Parzen Distance Computation, SLP Window Registers, ALU, CORDIC, MAC, Min-Max, Reconfigurable Memory, Binary Morphology]

138

Operation Categorization For Smart Camera Vision Algorithms

Type A. Data Access

Type B. Sorting or Minimum-Maximum Finding

Type C. Multiply-and-Accumulate-based

Kernel Processing

Type D. Morphology Processing

Type E. Fundamental Math Function

Type F. Arithmetic and Logical Operations

Type G. Statistics Accumulation

Type H. Algorithm Specific or Functional Specific

139

RSPE Example: Memory RSPE

[Diagram: eight Reconfigurable Memory RSPEs (0–7) chained into a delay line for one line of a 640x480 gray-scale image (8 bits per pixel)]

Each Reconfigurable Memory RSPE: 10 words, 64 bits per word (two ports)

140
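The sizing works out exactly: eight RSPEs of 10 x 64-bit words hold 5120 bits, one 640-pixel line of 8-bit pixels. A minimal Python sketch (a behavioral model only, not the hardware interface) shows the chained memories acting as a one-line FIFO, which is what window-based filters need to see the pixel one row above:

```python
from collections import deque

# Eight memory RSPEs x 10 words x 64 bits = 5120 bits
# = one 640-pixel line at 8 bits per pixel.
RSPES, WORDS, WORD_BITS, PIXEL_BITS = 8, 10, 64, 8
assert RSPES * WORDS * WORD_BITS == 640 * PIXEL_BITS

def make_line_delay(width=640):
    """Model the chained memory RSPEs as a FIFO one image line deep."""
    return deque([0] * width, maxlen=width)

def push_pixel(delay, pixel):
    """Feed one pixel in; the pixel from one line above falls out."""
    delayed = delay[0]
    delay.append(pixel & 0xFF)
    return delayed
```

Feeding pixel i of row r returns pixel i of row r-1 once the line is full, so a 3x3 window needs two such delay lines.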

RSPE Example: MAC

[Diagram: an input selector feeds nine multipliers (MUL 0–8) from the context registers, the reconfigurable memory array, or other RSPEs; the products are reduced by four adder trees (0–3)]

141
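A MAC RSPE of this shape maps naturally onto kernel processing (operation type C above): nine multipliers cover one 3x3 kernel per pass. A toy Python sketch (illustrative names, not the hardware datapath) of the multiply stage plus an adder tree:

```python
def mac9(pixels, coeffs):
    """Nine parallel multiplies followed by an adder-tree reduction:
    one 3x3 kernel tap per multiplier (MUL 0..8)."""
    assert len(pixels) == len(coeffs) == 9
    products = [p * c for p, c in zip(pixels, coeffs)]          # MUL stage
    pairs = [products[i] + products[i + 1] for i in range(0, 8, 2)]  # tree level 1
    return pairs[0] + pairs[1] + pairs[2] + pairs[3] + products[8]   # final sum

def conv3x3(image, kernel):
    """Apply a 3x3 kernel over the valid region of a 2-D list image."""
    h, w = len(image), len(image[0])
    taps = [kernel[r][c] for r in range(3) for c in range(3)]
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(h - 2):
        for x in range(w - 2):
            window = [image[y + r][x + c] for r in range(3) for c in range(3)]
            out[y][x] = mac9(window, taps)
    return out
```

In hardware the nine products are formed in one cycle and the adder trees pipeline the reduction; the Python loop only shows the dataflow.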

Morphology RSPE

– 16 RSPEs support 16-operation-level parallelism

– Connected one by one with the RI

– 64-way SLP in each RSPE

– Uses the Reconfigurable Memory RSPE

142
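Binary morphology is a natural fit for sub-word parallelism: packing 64 binary pixels into one datapath word lets a horizontal erosion or dilation run 64 pixels per operation with plain shifts and ANDs. A sketch with Python integers standing in for 64-bit words (the 1x3 structuring element is an illustrative assumption; the slide does not specify one):

```python
MASK64 = (1 << 64) - 1  # keep results within a 64-bit word

def erode_1x3(row):
    """1x3 binary erosion over 64 packed pixels: a pixel survives
    only if it and both horizontal neighbours are 1."""
    left  = (row << 1) & MASK64   # left neighbour (word border becomes 0)
    right = row >> 1              # right neighbour
    return row & left & right

def dilate_1x3(row):
    """1x3 binary dilation: a pixel is set if any of the three is 1."""
    return (row | ((row << 1) & MASK64) | (row >> 1)) & MASK64
```

An opening (`dilate_1x3(erode_1x3(row))`) removes one-pixel-wide noise, 64 pixels per call — the kind of 64-way SLP the Morphology RSPE exploits.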

Time Frame 1: Video Object Segmentation

[Diagram: the ReSSP architecture configured for video object segmentation; only the RSPEs this task needs are active, and unused RSPEs are clock gated]

143

Time Frame 5: Particle Filter

[Diagram: the same ReSSP architecture reconfigured for particle-filter tracking; unused RSPEs are clock gated]

144

Processing Capability of ReSSP

Low-Level Operation       Frame Rate @ 142 MHz
7x7 Gaussian filter       443 fps @ 1920x1080
Morphological operation   56,722 fps @ 1920x1080
Histogram accumulation    37,488 80x80 image blocks per second

145
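These figures imply substantial spatial parallelism. A back-of-the-envelope check (plain arithmetic, no hardware detail assumed) for the 7x7 Gaussian filter:

```python
CLOCK_HZ = 142e6
W, H, FPS = 1920, 1080, 443          # full-HD frames at the reported rate
TAPS = 7 * 7                          # multiply-accumulates per pixel

pixels_per_second = W * H * FPS                   # ~9.2e8 pixels/s
pixels_per_cycle = pixels_per_second / CLOCK_HZ   # ~6.5 pixels per clock
macs_per_cycle = pixels_per_cycle * TAPS          # ~317 MACs per clock
```

So sustaining 443 fps at 1080p requires on the order of 317 multiply-accumulates every cycle — the kind of throughput the wide SLP datapaths and MAC arrays are there to supply.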

Processing Capability of ReSSP

Processing Capability of ReSSP

Smart Camera Application                   Specification
Video object segmentation and tracking     Segmentation: 640x480 @ 125 fps;
                                           segmentation + tracking: 640x480 @ 30 fps, 11 objects
Face detection, scoring, and ranking       150 faces per second
Object detection and recognition (SIFT)    1920x1080 full-HD object recognition in real time

146

One Example Video Analysis Engine

Work                                     ReSSP
Process                                  TSMC 90nm 1P9M CMOS
Die size                                 10.4 mm^2 (3.2 mm x 3.2 mm)
Power supply                             Core 1.2 V, I/O 2.5 V
Total gate count                         0.9M gates (2-input NAND, including on-chip memory)
On-chip memory (Kb)                      56 (including context registers)
Working frequency                        Max 149 MHz
Peak performance (GOPS)                  1157.82
Power consumption                        197 mW (peak)
Area efficiency (GOPS/mm^2)              111.329
Power efficiency (TOPS/W)                5.877
Resolution and spec for image analysis   SIFT 640x480 @ 30 fps, and other applications with high spec

147

Ref: W.-K. Chan et al., “ReSSP: a 5.877 TOPS/W reconfigurable smart-camera stream processor,” in Proc. CICC, Sept. 2011.

Cooperative Surveillance System

148

[Diagram: fixed cameras and a robot (mobile camera) exchange video and commands with a control center over a Wi-Fi connection]

Ref: C.-C. Chia, W.-K. Chan, and S.-Y. Chien, "Cooperative surveillance system with fixed camera object localization and mobile robot target tracking," in Proc. Pacific Rim Symposium on Advances in Image and Video Technology (PSIVT 2009), pp. 886-897, Tokyo, Japan, Jan. 2009.

Cooperative Surveillance System

149

[Diagram: fixed cameras and the control center perform target detection and localization — ZigBee localization, fixed-camera object segmentation, camera-to-map homography, intruder detection, and vision localization; the mobile robot performs target finding and target tracking]
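The camera-to-map homography step projects a detected object's image coordinates onto the floor map. A minimal sketch of the homogeneous multiply and perspective divide (the matrices used below are placeholders; in the system H comes from camera calibration):

```python
def apply_homography(H, x, y):
    """Map image point (x, y) to map coordinates through a
    3x3 homography H: homogeneous multiply, then divide by w."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w
```

With the identity matrix points map to themselves; a calibrated H lets the control center place every camera's detections in the common map frame that the robot navigates.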

Cooperative Surveillance System

150

[Demo slides 151–152: cooperative surveillance system in operation]

Video Summary

153

19 cameras, resolution 480x300 @ 30 fps, length 435 s

Video Summarization

• Video summarization (or video abstraction)

– Generates a short representation of the original video for quick indexing and browsing

155 https://mutaverse.com/

Video Summary over Distributed Video Sensors

• Video summary on multi-view video?

• Conventional approach: collect all the videos at a server and cluster them → centralized approach

• Distributed approach?

156

Video Summary over Distributed Video Sensors

[Diagram: per-view pipeline at each sensor node (views #1–#n) — importance estimation (foreground object) → feature extraction (color histogram) → redundancy estimation → encoding; unimportant or redundant frames are ignored; frames are rearranged into shots, short shots are filtered out, and the kept shots from all views form the summary]

157
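One way to read the pipeline as code — a toy sketch, not the authors' implementation: each node keeps frames with enough foreground, describes them with a color histogram, and drops frames whose histogram is too close to features already sent by itself or a neighbouring view (the threshold values and L1 distance are illustrative choices):

```python
def importance(frame_fg_ratio, thresh=0.05):
    """Importance estimation: keep frames with enough foreground."""
    return frame_fg_ratio >= thresh

def histogram(pixels, bins=8):
    """Feature extraction: normalized histogram of 8-bit pixel values."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = float(len(pixels))
    return [c / total for c in counts]

def redundant(hist, sent_hists, max_l1=0.5):
    """Redundancy estimation: L1 distance against features already
    sent by this view or exchanged with neighbouring views."""
    return any(sum(abs(a - b) for a, b in zip(hist, h)) < max_l1
               for h in sent_hists)
```

Only compact histograms travel between nodes, so redundancy across views is pruned before any video data is encoded and transmitted.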

Summarization Result

• Distributed video summary drastically reduces data size with a local analysis engine and information exchange between sensor nodes

• Has the potential to reduce sensor power consumption

159

[Bar chart: bandwidth (Kbps, 0–25000) of the total video (100%), the data actually sent through intra-view and inter-view exchange (14.5% and 8.7% of the total), and the final summarized video; bars split into video data and feature data]

Outline

• Internet of Things (or Machine-to-Machine)
  – Introduction and overview
  – Technical challenges

• Video sensor systems
  – Role and requirements of video cameras in IoT
  – Power analysis of wireless video sensors
  – Distributed video coding
    • State-of-the-art DVC systems
  – Distributed video analysis

• Summary

162

Summary

• IoT will shape the way we live, play, and work

• Ultra-low-power wireless video sensor systems will play a critical role

• Many challenges & opportunities
  – Context-aware distributed video coding
    • Video coding complexity vs. wireless transmission cost
  – Application-adaptive distributed video analysis
    • Flexibility vs. efficiency

→ New research opportunities

163

Pipeline of Distributed Data Sensing and Analysis

[Diagram: three stages — sensor node, aggregator node, cloud.
Sensor node: video content analysis, then conventional video coding or Wyner-Ziv video coding.
Aggregator node: multi-channel conventional and Wyner-Ziv video decoding, multi-channel video content analysis, and re-encoding (conventional or Wyner-Ziv) toward the cloud.
Cloud: conventional video decoding, Wyner-Ziv video decoding, and large-scale video content analysis.
From sensor to cloud, the data from each camera shrinks (large → small) while the semantic level rises (low → high): a data filtering process followed by a context inferring process]

Acknowledgement

• Intel-NTU Connected Context Computing Center (http://ccc.ntu.edu.tw)
  – Prof. Liang-Gee Chen
  – Dr. Chia-han Lee and Dr. V. Srinivasa Somayazulu
  – Team members: Teng-Yuan Cheng, Hsing-Min Chen, Pei-Kuei Tsung, Chieh-Chuan Chiu, Hsin-Fang Wu, Shun-Hsing Ou, Yu-Chun Wang, Cheng-Yen Su, Yueh-Ying Lee, and Chester Liu
  – Members of Media IC and System Lab and DSP/IC Design Lab

• National Science Council
  – NSC 99-2911-I-002-201; NSC 100-2911-I-002-001

• National Taiwan University
  – 99R70600; 10R70500; 10R70501; 101R7501

165

References

• Overview
  – G. Lawton, "Machine-to-machine technology gears up for growth," IEEE Computer, vol. 37, no. 9, pp. 12-15, Sept. 2004.
  – M. Starsinic, "System architecture challenges in the home M2M network," in Proc. LISAT 2010, May 2010.
  – Y.-C. Lu and S.-I. Hu, "Considerations in technology and policy planning for Machine-to-Machine (M2M) networks," in Proc. IEEE International Conference on Information Management and Engineering, pp. 448-451, May 2011.
  – Y.-K. Chen, "Challenges and Opportunities of Internet of Things," in Proc. Asia and South Pacific Design Automation Conference, Feb. 2012.
  – J. Zhang et al., "Mobile Cellular Networks and Wireless Sensor Networks: Toward Convergence," IEEE Communications Magazine, vol. 50, no. 3, pp. 164-169, 2012.
  – IEEE Internet Computing, "Internet of Things" track.

166

References

• Distributed video coding and analysis

– B. Girod, A. M. Aaron, S. Rane, and D. Rebollo-Monedero, "Distributed Video Coding," Proceedings of the IEEE, vol. 93, no. 1, Jan. 2005.

– X. Artigas, J. Ascenso, M. Dalai, S. Klomp, D. Kubasov, and M. Ouaret, "The DISCOVER codec: Architecture, techniques and evaluation," in Proc. Picture Coding Symposium (PCS’07), Nov. 2007.

– C.-C. Chiu, S.-Y. Chien, C.-h. Lee, V. S. Somayazulu, and Y.-K. Chen, "Distributed video coding: a promising solution for distributed wireless video sensors or not?" in Proc. Visual Communications and Image Processing 2011, Nov. 2011.

– R. Puri, A. Majumdar, P. Ishwar, and K. Ramchandran, "Distributed Video Coding in Wireless Sensor Networks," IEEE Signal Processing Magazine, July 2006.

– S.-Y. Chien, T.-Y. Cheng, S.-H. Ou, C.-C. Chiu, C.-H. Lee, V. S. Somayazulu, and Y.-K. Chen, "Power Consumption Analysis for Distributed Video Sensors in Machine-to-Machine Networks," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 3, no. 1, March 2013.

– R. Martins, C. Brites, J. Ascenso, and F. Pereira, "Statistical motion learning for improved transform domain Wyner–Ziv video coding," IET Image Processing, vol. 4, no. 1, 2010.

– Y.-C. Shen, P.-S. Wang, and J.-L. Wu, "Progressive Side Information Refinement with Non-Local Means Based Denoising Process for Wyner-Ziv Video Coding," Proc. Data Compression Conference, 2012.

– C.-C. Chiu, S.-Y. Chien, C.-h. Lee, V. S. Somayazulu, and Y.-K. Chen, "Hybrid distributed video coding with frame level coding mode selection," in Proc. ICIP, 2012.

– S.-Y. Chien and W.-K. Chan, "Cooperative Visual Surveillance Network with Embedded Content Analysis Engine," Video Surveillance, Available from: http://www.intechopen.com/articles/show/title/cooperative-visual-surveillance-network-with-embedded-content-analysis-engine

– W.-K. Chan, J.-Y. Chang, T.-W. Chen, Y.-H. Tseng, and S.-Y. Chien, "Efficient Content Analysis Engine for Visual Surveillance Network," IEEE Transactions on Circuits and Systems for Video Technology, vol. 19, no. 5, 2009.

– W.-K. Chan, Y.-H. Tseng, P.-K. Tsung, T.-D. Chuang, Y.-M. Tsai, W.-Y. Chen, L.-G. Chen, and S.-Y. Chien, "ReSSP: a 5.877 TOPS/W reconfigurable smart-camera stream processor," in Proc. CICC, Sept. 2011.

– C.-C. Chia, W.-K. Chan, and S.-Y. Chien, "Cooperative surveillance system with fixed camera object localization and mobile robot target tracking," in Proc. Pacific Rim Symposium on Advances in Image and Video Technology (PSIVT 2009), pp. 886 - 897, Tokyo, Japan, Jan. 2009.

167

References

• Sensing technologies
  – B. A. Warneke and K. S. J. Pister, "An Ultra-Low-Energy Microcontroller for Smart Dust Wireless Sensor Networks," in Proc. IEEE ISSCC, 2004.
  – C.-K. Wu et al., "A 80kS/s 36mW Resistor-based Temperature Sensor using BGR-free SAR ADC with an Unevenly-weighted Resistor String in 0.18μm CMOS," in Proc. IEEE Symposium on VLSI Circuits, pp. 222-223, June 2011.
  – C.-C. Chiu et al., "Distributed video coding: a promising solution for distributed wireless video sensors or not?" in Proc. Visual Communications and Image Processing 2011, Nov. 2011.
  – S.-Y. Chien et al., "Power Optimization of Wireless Video Sensor Nodes in M2M Networks," in Proc. 17th Asia and South Pacific Design Automation Conference (ASP-DAC 2012), Jan. 2012.
  – M.-S. Liao et al., "A Novel Remote Agroecological Monitoring Systems with Autonomous Event Detection," IEEE Transactions on Mechatronics, 2012.

168

References

• Energy harvesting
  – S. Roundy and P. K. Wright, "A piezoelectric vibration based generator for wireless electronics," Smart Materials and Structures, vol. 13, no. 5, p. 1131, 2004.
  – A. Badel et al., "Efficiency Enhancement of a Piezoelectric Energy Harvesting Device in Pulsed Operation by Synchronous Charge Inversion," Journal of Intelligent Material Systems and Structures, vol. 16, pp. 889-901, 2005.
  – E. Lefeuvre et al., "A comparison between several vibration-powered piezoelectric generators for standalone systems," Sensors and Actuators A, vol. 126, pp. 405-416, 2006.
  – K. J. Kim et al., "Energy scavenging for energy efficiency in networks and applications," Bell Labs Technical Journal, vol. 15, no. 2, pp. 7-20, 2010.

169

References

• Self-configuration, self-optimization, self-healing, and self-protection
  – F. Wang and F.-Z. Li, "The design of an autonomic computing model and the algorithm for decision-making," in Proc. IEEE Granular Computing, 2005.
  – S. Karnouskos and M. M. J. Tariq, "An agent-based simulation of SOA-ready devices," in Proc. Computer Modeling and Simulation, 2008.
  – H.-L. Fu et al., "Energy-Efficient Reporting Mechanisms for Multi-Type Real-time Monitoring in Machine-to-Machine Communications Networks," in Proc. IEEE INFOCOM 2012 Main Conference, Orlando, Florida, 2012.

170

References

• Communication
  – K. Chang et al., "Global Wireless Machine-to-Machine Standardization," IEEE Internet Computing, vol. 15, no. 2, pp. 64-69, March-April 2011.
  – S. Y. Lien and K. C. Chen, "Massive Access Management for QoS Guarantees in 3GPP Machine-to-Machine Communications," IEEE Communications Letters, vol. 15, no. 3, pp. 311-313, 2011.
  – C.-F. Liao et al., "Toward Reliable Service Management in Message-Oriented Pervasive Systems," IEEE Transactions on Services Computing, vol. 4, no. 3, pp. 183-195, 2011.
  – IEEE Wireless Communications, special issue on "The Internet of Things," December 2010.
  – IEEE Communications Magazine, special issue on "Recent progress in machine-to-machine communications," April 2011.

171

References

• Data analysis
  – M. Balazinska et al., "Data Management in the Worldwide Sensor Web," IEEE Pervasive Computing, vol. 6, no. 2, pp. 30-40, April-June 2007.
  – H.-M. Lin et al., "iShare: An Ad-Hoc Sharing System for Internet Connectivity," in Proc. Human-Centric Communications and Networking Workshop, in conjunction with the International Wireless Communications and Mobile Computing Conference, 2011.
  – C.-C. Chou et al., "Characterizing Indoor Environment for Robot Navigation Using Velocity-Space Approach with Region Analysis and Look-Ahead Verification," IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 2, pp. 442-451, 2011.
  – Y.-C. Yen et al., "Evidence-based and Context-Aware Eldercare Using Persuasive Engagement Policy," in Proc. International Conference on Human-Computer Interaction, pp. 240-246, 2011.
  – C.-H. Lu et al., "A Reciprocal and Extensible Architecture for Multiple-Target Tracking in a Smart Home," IEEE Transactions on Systems, Man, and Cybernetics - Part C, vol. 41, no. 1, pp. 120-129, 2011.

172