java networking,ieee 2013 projects,m.tech 2013 projects,final year engineering projects,best student...

DESCRIPTION

CITL Tech Varsity is a leading institute that has been assisting M.Tech / MS / B.Tech / BE (EC, EEE, ETC, CS, IS, DCN, Power Electronics, Communication), MCA, and BCA students in various domains and technologies for the past several years.

DOMAINS WE ASSIST
HARDWARE: Embedded, Robotics, Quadcopter (Flying Robot), Biomedical, Biometric, Automotive, VLSI, Wireless (GSM, GPS, GPRS, RFID, Bluetooth, Zigbee), Embedded Android.
SOFTWARE: Cloud Computing, Mobile Computing, Wireless Sensor Network, Network Security, Networking, Wireless Network, Data Mining, Web Mining, Data Engineering, Cyber Crime, Android application development.
SIMULATION: Image Processing, Power Electronics, Power Systems, Communication, Biomedical, Geo Science & Remote Sensing, Digital Signal Processing, VANETs, Wireless Sensor Networks, Mobile Ad-hoc Networks.

TECHNOLOGIES WE WORK WITH: Embedded (8051, PIC, ARM7, ARM9, Embedded C), VLSI (Verilog, VHDL, Xilinx), Embedded Android, JAVA / J2EE, XML, PHP, SOA, Dotnet, Java Android, Matlab, and NS2.

TRAINING METHODOLOGY
1. Train you on the technology as per the project requirement.
2. IEEE paper explanation, flow of the project, system design.
3. Algorithm implementation and explanation.
4. Project execution and demo.
5. Documentation and presentation of the project.

TRANSCRIPT

Networking

NO. | PROJECT TITLE | ABSTRACT | DOMAIN | YOP (Year of Publication)

1 A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop wireless Networks With Order-Optimal Per-Flow Delay

Quantifying the end-to-end delay performance in multihop wireless networks is a well-known challenging problem. In this paper, we propose a new joint congestion control and scheduling algorithm for multihop wireless networks with fixed-route flows operated under a general interference model with interference degree K. Our proposed algorithm not only achieves a provable throughput guarantee (which is close to at least 1/K of the system capacity region), but also leads to explicit upper bounds on the end-to-end delay of every flow. Our end-to-end delay and throughput bounds are in simple and closed forms, and they explicitly quantify the tradeoff between throughput and delay of every flow. Furthermore, the per-flow end-to-end delay bound increases linearly with the number of hops that the flow passes through, which is order-optimal with respect to the number of hops. Unlike traditional solutions based on the back-pressure algorithm, our proposed algorithm combines window-based flow control with a new rate-based distributed scheduling algorithm. A key contribution of our work is to use a novel stochastic dominance approach to bound the corresponding per-flow throughput and delay, which otherwise are often intractable in these types of systems. Our proposed algorithm is fully distributed and requires a low per-node complexity that does not increase with the network size. Hence, it can be easily implemented in practice.

Networking domain

2013
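
Entry 1 couples window-based flow control at the source with a rate-based distributed scheduler. The sketch below illustrates only the window-based part: a source keeps at most W packets in flight and injects a new one per received acknowledgment. The Semaphore realization, class name, and window size are illustrative assumptions, not the paper's implementation.

```java
import java.util.concurrent.Semaphore;

// Sketch of window-based flow control: a flow may have at most W packets in
// flight; a new packet is injected only when an earlier one is acknowledged
// end-to-end. The paper couples this with a rate-based distributed scheduler
// that is not shown here.
public class WindowedSource {
    private final Semaphore window;

    public WindowedSource(int windowSize) {
        this.window = new Semaphore(windowSize);
    }

    /** Blocks until a window slot is free, then "sends" the packet. */
    public void send(int packetId) throws InterruptedException {
        window.acquire();
        System.out.println("injected packet " + packetId);
    }

    /** Called when an end-to-end acknowledgment arrives. */
    public void onAck(int packetId) {
        window.release();
        System.out.println("ack for packet " + packetId + ", window slot freed");
    }

    public static void main(String[] args) throws InterruptedException {
        WindowedSource source = new WindowedSource(2);
        source.send(1);
        source.send(2);
        source.onAck(1);  // frees a slot so a third packet can enter
        source.send(3);
    }
}
```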

2 ICTCP: Incast Congestion Control for TCP in Data-Center Networks

Transmission Control Protocol (TCP) incast congestion happens in high-bandwidth and low-latency networks when multiple synchronized servers send data to the same receiver in parallel. For many important data-center applications such as MapReduce and Search, this many-to-one traffic pattern is common. Hence TCP incast congestion may severely degrade their performance, e.g., by increasing response time. In this paper, we study TCP incast in detail by focusing on the relationships between TCP throughput, round-trip time (RTT), and receive window. Unlike previous approaches, which mitigate the impact of TCP incast congestion by using a fine-grained timeout value, our idea is to design an Incast congestion Control for TCP (ICTCP) scheme on the receiver side. In particular, our method adjusts the TCP receive window proactively before packet loss occurs. The implementation and experiments in our testbed demonstrate that we achieve almost zero timeouts and high goodput for TCP incast.

Networking domain

2013
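
Entry 2's core idea is that the receiver adjusts the advertised TCP receive window, using the gap between measured and expected per-connection goodput, before loss occurs. The sketch below is a minimal illustration of that adjustment loop; the class name, thresholds, and MSS value are assumptions, not the ICTCP algorithm's actual parameters.

```java
// Illustrative sketch of receiver-side receive-window adjustment in the
// spirit of ICTCP: grow the advertised window only when measured goodput is
// close to what the current window could deliver, shrink it when the window
// is badly under-used. Thresholds and MSS are assumed values.
public class IncastWindowController {

    private static final int MSS = 1460;               // bytes per segment (assumed)
    private static final double GROW_THRESHOLD = 0.1;  // near-saturation: allow growth
    private static final double SHRINK_THRESHOLD = 0.5; // heavy under-use: shrink

    /**
     * @param measuredGoodputBps goodput observed over the last interval (bytes/s)
     * @param rttSeconds         smoothed round-trip time for this connection
     * @param currentRwndBytes   receive window currently advertised
     * @return the receive window to advertise next (bytes)
     */
    public int adjustWindow(double measuredGoodputBps, double rttSeconds, int currentRwndBytes) {
        // Goodput the sender could achieve if it fully used the advertised window.
        double expectedGoodputBps = currentRwndBytes / rttSeconds;
        double ratio = (expectedGoodputBps - measuredGoodputBps) / expectedGoodputBps;

        if (ratio < GROW_THRESHOLD) {
            return currentRwndBytes + MSS;                      // nearly saturated: grow by one segment
        } else if (ratio > SHRINK_THRESHOLD) {
            return Math.max(2 * MSS, currentRwndBytes - MSS);   // under-used: shrink proactively
        }
        return currentRwndBytes;                                // otherwise keep the window unchanged
    }

    public static void main(String[] args) {
        IncastWindowController c = new IncastWindowController();
        System.out.println(c.adjustWindow(1_000_000, 0.001, 16 * 1460));
    }
}
```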

3 An Efficient and Robust Addressing Protocol for Node Autoconfiguration in Ad Hoc Networks

Address assignment is a key challenge in ad hoc networks due to the lack of infrastructure. Autonomous addressing protocols require a distributed and self-managed mechanism to avoid address collisions in a dynamic network with fading channels, frequent partitions, and joining/leaving nodes. We propose and analyze a lightweight protocol that configures mobile ad hoc nodes based on a distributed address database stored in filters that reduces the control load and makes the proposal robust to packet losses and network partitions. We evaluate the performance of our protocol, considering joining nodes, partition merging events, and network initialization. Simulation results show that our protocol resolves all the address collisions and also reduces the control traffic when compared to previously proposed protocols.

Networking domain

2013
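
Entry 3 keeps the distributed address database "in filters". A Bloom filter is one common way to realize such a compact membership structure; the sketch below uses one to test whether a candidate address is probably already allocated. The filter size, hash scheme, and AddressFilter name are illustrative assumptions, not the protocol's exact data structure.

```java
import java.util.BitSet;

// Generic Bloom-filter sketch for a distributed address database: nodes insert
// allocated addresses and test candidates for probable collisions before
// claiming them. Filter size and double hashing are illustrative choices.
public class AddressFilter {
    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public AddressFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    private int hash(int address, int i) {
        int h1 = Integer.hashCode(address);
        int h2 = Integer.rotateLeft(address * 0x9E3779B9, 16); // second hash for double hashing
        return Math.floorMod(h1 + i * h2, size);
    }

    public void markAllocated(int address) {
        for (int i = 0; i < hashCount; i++) bits.set(hash(address, i));
    }

    /** True means "probably allocated" (false positives possible, no false negatives). */
    public boolean probablyAllocated(int address) {
        for (int i = 0; i < hashCount; i++) {
            if (!bits.get(hash(address, i))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        AddressFilter filter = new AddressFilter(1 << 16, 4);
        filter.markAllocated(0x0A000001);                          // 10.0.0.1 already in use
        System.out.println(filter.probablyAllocated(0x0A000001));  // true
        System.out.println(filter.probablyAllocated(0x0A0000FE));  // almost certainly false
    }
}
```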

4 NICE: Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems


Cloud security is one of the most important issues that has attracted a lot of research and development effort in the past few years. Particularly, attackers can explore vulnerabilities of a cloud system and compromise virtual machines to deploy further large-scale Distributed Denial-of-Service (DDoS) attacks. DDoS attacks usually involve early-stage actions such as multistep exploitation, low-frequency vulnerability scanning, and compromising identified vulnerable virtual machines as zombies, and finally DDoS attacks through the compromised zombies. Within the cloud system, especially the Infrastructure-as-a-Service (IaaS) clouds, the detection of zombie exploration attacks is extremely difficult. This is because cloud users may install vulnerable applications on their virtual machines. To prevent vulnerable virtual machines from being compromised in the cloud, we propose a multiphase distributed vulnerability detection, measurement, and countermeasure selection mechanism called NICE, which is built on attack graph-based analytical models and reconfigurable virtual network-based countermeasures. The proposed framework leverages OpenFlow network programming APIs to build a monitor and control plane over distributed programmable virtual switches to significantly improve attack detection and mitigate attack consequences. The system and security evaluations demonstrate the efficiency and effectiveness of the proposed solution.

Networking domain

2013

5 Revealing Density-Based Clustering Structure from the Core-Connected Tree of a Network

Clustering is an important technique for mining the intrinsic community structures in networks. The density-based network clustering method is able to not only detect communities of arbitrary size and shape, but also identify hubs and outliers. However, it requires manual parameter specification to define clusters, and is sensitive to the parameter of density threshold which is difficult to determine. Furthermore, many real-world networks exhibit a hierarchical structure with communities embedded within other communities. Therefore, the clustering result of a global parameter setting cannot always describe the intrinsic clustering structure accurately. In this paper, we introduce a novel density-based network clustering method, called graph-skeleton-based clustering (gSkeletonClu). By projecting an undirected network to its core-connected maximal spanning tree, the clustering problem can be converted to detect core connectivity components on the tree. The density-based clustering of a specific parameter setting and the hierarchical clustering structure both can be efficiently extracted from the tree. Moreover, it provides a convenient way to automatically select the parameter and to achieve the meaningful cluster tree in a network. Extensive experiments on both real-world and synthetic networks demonstrate the superior performance of gSkeletonClu for effective and efficient density-based clustering.

Networking domain

2013
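
A core step in entry 5 is projecting the network onto a core-connected maximal spanning tree. The sketch below shows the generic building block for that step, a maximum-weight spanning tree computed with Kruskal's algorithm and union-find; the edge weights would be the paper's structural-similarity scores, which are not reproduced here.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Maximum-weight spanning tree via Kruskal's algorithm with union-find.
// In gSkeletonClu-style clustering the weight of an edge would be a
// similarity score between its endpoints; here weights are plain doubles.
public class MaxSpanningTree {
    record Edge(int u, int v, double weight) {}

    private final int[] parent;

    private MaxSpanningTree(int n) {
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
    }

    private int find(int x) {
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    }

    private boolean union(int a, int b) {
        int ra = find(a), rb = find(b);
        if (ra == rb) return false;
        parent[ra] = rb;
        return true;
    }

    /** Returns the edges of a maximum-weight spanning forest of an n-node graph. */
    public static List<Edge> build(int n, List<Edge> edges) {
        MaxSpanningTree uf = new MaxSpanningTree(n);
        List<Edge> tree = new ArrayList<>();
        edges.sort(Comparator.comparingDouble((Edge e) -> e.weight).reversed()); // heaviest first
        for (Edge e : edges) {
            if (uf.union(e.u, e.v)) tree.add(e); // keep the edge only if it joins two components
        }
        return tree;
    }

    public static void main(String[] args) {
        List<Edge> edges = new ArrayList<>(List.of(
                new Edge(0, 1, 0.9), new Edge(1, 2, 0.4),
                new Edge(0, 2, 0.7), new Edge(2, 3, 0.8)));
        System.out.println(build(4, edges)); // keeps the 0.9, 0.8 and 0.7 edges
    }
}
```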

6 Fast Transmission to Remote Cooperative Groups: A New Key Management Paradigm

The problem of efficiently and securely broadcasting to a remote cooperative group occurs in many newly emerging networks. A major challenge in devising such systems is to overcome the obstacles of the potentially limited communication from the group to the sender, the unavailability of a fully trusted key generation center, and the dynamics of the sender. The existing key management paradigms cannot deal with these challenges effectively. In this paper, we circumvent these obstacles and close this gap by proposing a novel key management paradigm. The new paradigm is a hybrid of traditional broadcast encryption and group key agreement. In such a system, each member maintains a single public/secret key pair. Upon seeing the public keys of the members, a remote sender can securely broadcast to any intended subgroup chosen in an ad hoc way. Following this model, we instantiate a scheme that is proven secure in the standard model. Even if all the nonintended members collude, they cannot extract any useful information from the transmitted messages. After the public group encryption key is extracted, both the computation overhead and the communication cost are independent of the group size. Furthermore, our scheme facilitates simple yet efficient member deletion/addition and flexible rekeying strategies. Its strong security against collusion, its constant overhead, and its implementation friendliness without relying on a fully trusted authority render our protocol a very promising solution to many applications.

Networking domain

2013

7 Peer-Assisted Social Media Streaming with Social Reciprocity

Online video sharing and social networking are cross-pollinating rapidly in today's Internet: Online social network users are sharing more and more media contents among each other, while online video sharing sites are leveraging social connections among users to promote their videos. An intriguing development as it is, the operational challenge in previous video sharing systems persists, i.e., the large server cost demanded for scaling of the systems. Peer-to-peer video sharing could be a rescue, only if the video viewers' mutual resource contribution has been fully incentivized and efficiently scheduled. Exploring the unique advantages of a social network based video sharing system, we advocate utilizing social reciprocities among peers with social relationships for efficient contribution incentivization and scheduling, so as to enable high-quality video streaming with low server cost. We exploit social reciprocity with two give-and-take ratios at each peer: (1) peer contribution ratio (PCR), which evaluates the reciprocity level between a pair of social friends, and (2) system contribution ratio (SCR), which records the give-and-take level of the user to and from the entire system. We design efficient peer-to-peer mechanisms for video streaming using the two ratios, where each user optimally decides which other users to seek relay help from and help in relaying video streams, respectively, based on combined evaluations of their social relationship and historical reciprocity levels. Our design achieves effective incentives for resource contribution, load balancing among relay peers, as well as efficient social-aware resource scheduling. We also discuss practical implementation and implement our design in a prototype social media sharing system. Our extensive evaluations based on PlanetLab experiments verify that high-quality large-scale social media sharing can be achieved with conservative server costs.

Networking domain

2013
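
Entry 7 ranks potential relay helpers by two give-and-take ratios, PCR (per-friend reciprocity) and SCR (contribution to the whole system). The sketch below is a hypothetical bookkeeping class that computes both ratios from transfer counters and picks the top-k helpers; the field names, the +1 smoothing terms, and the equal weighting of the two ratios are assumptions, not the paper's scoring rule.

```java
import java.util.Comparator;
import java.util.List;

// Sketch of ranking candidate relay peers with a per-friend peer contribution
// ratio (PCR) and a system-wide contribution ratio (SCR). The score formula is
// an illustrative assumption.
public class ReciprocityRanker {

    /** Transfer history kept for one social friend. */
    record FriendHistory(String peerId, long bytesReceivedFrom, long bytesSentTo,
                         long systemBytesUploaded, long systemBytesDownloaded) {

        double pcr() { // how much this friend has given us relative to what we gave them
            return (double) (bytesReceivedFrom + 1) / (bytesSentTo + 1);
        }

        double scr() { // how much this friend gives to the whole system vs. takes from it
            return (double) (systemBytesUploaded + 1) / (systemBytesDownloaded + 1);
        }

        double score() { // combined evaluation; equal weighting is an assumption
            return 0.5 * pcr() + 0.5 * scr();
        }
    }

    /** Pick the k most "deserving" friends to ask for relay help. */
    public static List<FriendHistory> selectHelpers(List<FriendHistory> friends, int k) {
        return friends.stream()
                .sorted(Comparator.comparingDouble(FriendHistory::score).reversed())
                .limit(k)
                .toList();
    }

    public static void main(String[] args) {
        List<FriendHistory> friends = List.of(
                new FriendHistory("alice", 500_000, 100_000, 2_000_000, 1_000_000),
                new FriendHistory("bob",    50_000, 400_000,   500_000, 3_000_000));
        System.out.println(selectHelpers(friends, 1).get(0).peerId()); // alice
    }
}
```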

8 Efficient Storage and Processing of High-Volume Network Monitoring Data

Monitoring modern networks involves storing and transferring huge amounts of data. To cope with this problem, in this paper we propose a technique that transforms the measurement data into a representation format meeting two main objectives at the same time. First, it allows a number of operations to be performed directly on the transformed data with a controlled loss of accuracy, thanks to the mathematical framework it is based on. Second, the new representation has a small memory footprint, reducing the space needed for data storage and the time needed for data transfer. To validate our technique, we analyze its performance in terms of accuracy and memory footprint. The results show that the transformed data closely approximate the original data (within 5% relative error) while achieving a compression ratio of 20%; the storage footprint can also be gradually reduced towards that of state-of-the-art compression tools, such as bzip2, if a higher approximation is allowed. Finally, a sensitivity analysis shows that the technique allows the accuracy on different input fields to be traded off to accommodate specific application needs, while a scalability analysis indicates that the technique scales with input size spanning up to three orders of magnitude.

Networking domain

2013

9 A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks With Order-Optimal Per-Flow Delay

Quantifying the end-to-end delay performance in multihop wireless networks is a well-known challenging problem. In this paper, we propose a new joint congestion control and scheduling algorithm for multihop wireless networks with fixed-route flows operated under a general interference model with interference degree K. Our proposed algorithm not only achieves a provable throughput guarantee (which is close to at least 1/K of the system capacity region), but also leads to explicit upper bounds on the end-to-end delay of every flow. Our end-to-end delay and throughput bounds are in simple and closed forms, and they explicitly quantify the tradeoff between throughput and delay of every flow. Furthermore, the per-flow end-to-end delay bound increases linearly with the number of hops that the flow passes through, which is order-optimal with respect to the number of hops. Unlike traditional solutions based on the back-pressure algorithm, our proposed algorithm combines window-based flow control with a new rate-based distributed scheduling algorithm. A key contribution of our work is to use a novel stochastic dominance approach to bound the corresponding per-flow throughput and delay, which otherwise are often intractable in these types of systems. Our proposed algorithm is fully distributed and requires a low per-node complexity that does not increase with the network size. Hence, it can be easily implemented in practice.

Networking domain

2013

10 Optimal Content Placement for Peer-to-Peer Video-on-Demand Systems

In this paper, we address the problem of content placement in peer-to-peer (P2P) systems, with the objective of maximizing the utilization of peers' uplink bandwidth resources. We consider system performance under a many-user asymptotic. We distinguish two scenarios, namely “Distributed Server Networks” (DSNs) for which requests are exogenous to the system, and “Pure P2P Networks” (PP2PNs) for which requests emanate from the peers themselves. For both scenarios, we consider a loss network model of performance and determine asymptotically optimal content placement strategies in the case of a limited content catalog. We then turn to an alternative “large catalog” scaling where the catalog size scales with the peer population. Under this scaling, we establish that storage space per peer must necessarily grow unboundedly if bandwidth utilization is to be maximized. Relating the system performance to properties of a specific random graph model, we then identify a content placement strategy and a request acceptance policy that jointly maximize bandwidth utilization, provided storage space per peer grows unboundedly, although arbitrarily slowly, with system size.

Networking domain

2013

11 Throughput-Optimal Scheduling in Multihop Wireless Networks Without Per-Flow Information

In this paper, we consider the problem of link scheduling in multihop wireless networks under general interference constraints. Our goal is to design scheduling schemes that do not use per-flow or per-destination information, maintain a single data queue for each link, and exploit only local information, while guaranteeing throughput optimality. Although the celebrated back-pressure algorithm maximizes throughput, it requires per-flow or per-destination information. It is usually difficult to obtain and maintain this type of information, especially in large networks, where there are numerous flows. Also, the back-pressure algorithm maintains a complex data structure at each node, keeps exchanging queue-length information among neighboring nodes, and commonly results in poor delay performance. In this paper, we propose scheduling schemes that can circumvent these drawbacks and guarantee throughput optimality. These schemes use either the readily available hop-count information or only the local information for each link. We rigorously analyze the performance of the proposed schemes using fluid limit techniques via an inductive argument and show that they are throughput-optimal. We also conduct simulations to validate our theoretical results in various settings and show that the proposed schemes can substantially improve the delay performance in most scenarios.

networking domain

2013

12 Back-Pressure-Based Packet-by-Packet Adaptive Routing in Communication Networks

Back-pressure-based adaptive routing algorithms where each packet is routed along a possibly different path have been extensively studied in the literature. However, such algorithms typically result in poor delay performance and involve high implementation complexity. In this paper, we develop a new adaptive routing algorithm built upon the widely studied back-pressure algorithm. We decouple the routing and scheduling components of the algorithm by designing a probabilistic routing table that is used to route packets to per-destination queues. The scheduling decisions in the case of wireless networks are made using counters called shadow queues. The results are also extended to the case of networks that employ simple forms of network coding. In that case, our algorithm provides a low-complexity solution to optimally exploit the routing-coding tradeoff.

networking domain

2013
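
Entry 12 builds on the classic back-pressure principle, in which a node forwards a packet to the neighbor with the largest positive queue-length differential for the packet's destination. The sketch below shows only that baseline rule; the paper's shadow queues and probabilistic routing table are not modeled.

```java
import java.util.Map;

// Classic back-pressure next-hop selection: forward a packet for a given
// destination to the neighbor with the largest positive queue differential.
// This is the baseline that entry 12 improves on.
public class BackPressureForwarder {

    /**
     * @param localQueueLen     this node's queue length for the destination
     * @param neighborQueueLens neighbor id -> that neighbor's queue length for the destination
     * @return the neighbor to forward to, or null if no positive differential exists
     */
    public static String chooseNextHop(int localQueueLen, Map<String, Integer> neighborQueueLens) {
        String best = null;
        int bestDiff = 0; // only forward when the differential is strictly positive
        for (Map.Entry<String, Integer> e : neighborQueueLens.entrySet()) {
            int diff = localQueueLen - e.getValue();
            if (diff > bestDiff) {
                bestDiff = diff;
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(chooseNextHop(10, Map.of("n1", 7, "n2", 3, "n3", 12))); // n2
    }
}
```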

13 LDTS: A Lightweight and Dependable Trust System for Clustered Wireless Sensor Networks

The resource efficiency and dependability of a trust system are the most fundamental requirements for any wireless sensor network (WSN). However, existing trust systems developed for WSNs are incapable of satisfying these requirements because of their high overhead and low dependability. In this work, we propose a lightweight and dependable trust system (LDTS) for WSNs, which employs clustering algorithms. First, a lightweight trust decision-making scheme is proposed based on the nodes' identities (roles) in the clustered WSNs, which is suitable for such WSNs because it facilitates energy saving. By canceling feedback between cluster members (CMs) or between cluster heads (CHs), this approach can significantly improve system efficiency while reducing the effect of malicious nodes. More importantly, considering that CHs take on large amounts of data forwarding and communication tasks, a dependability-enhanced trust evaluating approach is defined for cooperation between CHs. This approach can effectively reduce networking consumption while limiting the effect of malicious, selfish, and faulty CHs. Moreover, a self-adaptive weighted method is defined for trust aggregation at the CH level. This approach surpasses the limitations of traditional weighting methods for trust factors, in which weights are assigned subjectively. Theory as well as simulation results show that LDTS demands less memory and communication overhead compared with the current typical trust systems for WSNs.

Networking domain

2013
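
Entry 13 mentions a self-adaptive weighted method for trust aggregation at the cluster-head level. The sketch below illustrates the general idea with an assumed rule: reports that deviate strongly from the mean receive smaller weights, so a few dishonest reports cannot dominate the aggregate. This inverse-deviation weighting is an illustrative stand-in, not LDTS's actual formula.

```java
import java.util.Arrays;

// Sketch of self-adaptive weighted trust aggregation: reports that deviate
// strongly from the overall mean get smaller weights. The inverse-deviation
// rule below is an illustrative assumption.
public class AdaptiveTrustAggregator {

    /** Aggregate trust reports in [0,1] into a single trust value. */
    public static double aggregate(double[] reports) {
        double mean = Arrays.stream(reports).average().orElse(0.0);
        double[] weights = new double[reports.length];
        double weightSum = 0.0;
        for (int i = 0; i < reports.length; i++) {
            weights[i] = 1.0 / (0.01 + Math.abs(reports[i] - mean)); // small constant avoids division by zero
            weightSum += weights[i];
        }
        double aggregate = 0.0;
        for (int i = 0; i < reports.length; i++) {
            aggregate += reports[i] * (weights[i] / weightSum);
        }
        return aggregate;
    }

    public static void main(String[] args) {
        // Three honest reports near 0.8 and one malicious report of 0.1:
        // the outlier gets a small weight, so the aggregate stays close to 0.8.
        System.out.println(aggregate(new double[] {0.82, 0.78, 0.80, 0.10}));
    }
}
```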

14 HASBE: A Hierarchical Attribute-Based Solution for Flexible and Scalable Access Control in Cloud Computing

Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns on outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper, we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility and fine-grained access control in supporting compound attributes of ASBE. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt and analyze its performance and computational complexity. We implement our scheme and show that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing with comprehensive experiments.

Networking domain

2013

15 CAM: Cloud-Assisted Privacy Preserving Mobile Health Monitoring

Cloud-assisted mobile health (mHealth) monitoring, which applies the prevailing mobile communications and cloud computing technologies to provide feedback decision support, has been considered as a revolutionary approach to improving the quality of healthcare service while lowering the healthcare cost. Unfortunately, it also poses a serious risk to both clients' privacy and the intellectual property of monitoring service providers, which could deter the wide adoption of mHealth technology. This paper addresses this important problem and designs a cloud-assisted privacy-preserving mobile health monitoring system to protect the privacy of the involved parties and their data. Moreover, the outsourcing decryption technique and a newly proposed key private proxy re-encryption are adapted to shift the computational complexity of the involved parties to the cloud without compromising clients' privacy and service providers' intellectual property. Finally, our security and performance analysis demonstrates the effectiveness of our proposed design.

Networking domain

2013

16 Ant Colony Optimization for Software Project Scheduling and Staffing with an Event-Based Scheduler

Research into developing effective computer aided techniques for planning software projects is important and challenging for software engineering. Different from projects in other fields, software projects are people-intensive activities and their related resources are mainly human resources. Thus, an adequate model for software project planning has to deal with not only the problem of project task scheduling but also the problem of human resource allocation. But as both of these two problems are difficult, existing models either suffer from a very large search space or have to restrict the flexibility of human resource allocation to simplify the model. To develop a flexible and effective model for software project planning, this paper develops a novel approach with an event-based scheduler (EBS) and an ant colony optimization (ACO) algorithm. The proposed approach represents a plan by a task list and a planned employee allocation matrix. In this way, both the issues of task scheduling and employee allocation can be taken into account. In the EBS, the beginning time of the project, the time when resources are released from finished tasks, and the time when employees join or leave the project are regarded as events. The basic idea of the EBS is to adjust the allocation of employees at events and keep the allocation unchanged at nonevents. With this strategy, the proposed method enables the modeling of resource conflict and task preemption and preserves the flexibility in human resource allocation. To solve the planning problem, an ACO algorithm is further designed. Experimental results on 83 instances demonstrate that the proposed method is very promising.

Networking domain

2013
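
Entry 16's event-based scheduler (EBS) recomputes the employee allocation only at events (project start, tasks releasing resources, employees joining or leaving) and keeps it fixed in between. The minimal sketch below shows that control flow with the reallocation step stubbed out; in the paper it is driven by the ACO-generated task list and allocation matrix.

```java
import java.util.List;
import java.util.TreeSet;

// Sketch of the event-based scheduler (EBS) idea: the allocation is adjusted
// only at events and kept unchanged at non-events. The reallocation itself is
// stubbed out here.
public class EventBasedScheduler {

    public static void run(List<Double> eventTimes) {
        TreeSet<Double> timeline = new TreeSet<>(eventTimes); // sorted, duplicates removed
        for (double t : timeline) {
            reallocate(t); // adjust the employee allocation at the event ...
            // ... and keep it unchanged until the next event.
        }
    }

    private static void reallocate(double time) {
        System.out.printf("t=%.1f: recompute employee allocation%n", time);
    }

    public static void main(String[] args) {
        // Events: project start, two tasks finishing, one employee leaving.
        run(List.of(0.0, 12.5, 12.5, 20.0, 35.0));
    }
}
```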

17 A Multiagent Modeling and Investigation of Smart Homes With Power Generation, Storage, and Trading Features (grid computing)

Smart homes, as active participants in a smart grid, may no longer be modeled by passive load curves, because their interactive communication and bidirectional power flow within the smart grid affect demand, generation, and electricity rates. To consider such dynamic environmental properties, we use a multiagent-system-based approach in which individual homes are autonomous agents making rational decisions to buy, sell, or store electricity based on their present and expected future amount of load, generation, and storage, accounting for the benefits each decision can offer. In the proposed scheme, home agents prioritize their decisions based on the expected utilities they provide. Smart homes' intention to minimize their electricity bills is in line with the grid's aim to flatten the total demand curve. With a set of case studies and sensitivity analyses, we show how the overall performance of the home agents converges, as an emergent behavior, to an equilibrium benefiting both entities in different operational conditions, and determines the situations in which conventional homes would benefit from purchasing their own local generation-storage systems.

Networking domain

2013
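
Entry 17 describes home agents that choose to buy, sell, or store electricity by comparing expected utilities. The sketch below is a deliberately simplified decision rule (price-times-energy utilities with a storage-efficiency factor); the utility expressions are assumptions, not the paper's agent model.

```java
// Sketch of a home agent choosing among BUY, SELL and STORE by expected
// utility. The simple price * energy utilities with a round-trip storage
// efficiency are illustrative assumptions.
public class HomeAgent {
    enum Action { BUY, SELL, STORE }

    /**
     * @param surplusKwh        generation minus load for the next period (can be negative)
     * @param priceNow          current electricity price per kWh
     * @param expectedPriceNext forecast price per kWh for the next period
     * @param storageEfficiency round-trip efficiency of the battery, e.g. 0.9
     */
    public static Action decide(double surplusKwh, double priceNow,
                                double expectedPriceNext, double storageEfficiency) {
        if (surplusKwh <= 0) {
            return Action.BUY; // no surplus to sell or store; any deficit is bought from the grid
        }
        double sellUtility = surplusKwh * priceNow;
        double storeUtility = surplusKwh * storageEfficiency * expectedPriceNext;
        return storeUtility > sellUtility ? Action.STORE : Action.SELL;
    }

    public static void main(String[] args) {
        System.out.println(decide(3.0, 0.10, 0.20, 0.9)); // STORE: prices expected to rise
        System.out.println(decide(3.0, 0.20, 0.10, 0.9)); // SELL: prices expected to fall
    }
}
```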

18   AMPLE: An Adaptive Traffic Engineering System Based on Virtual Routing Topologies

Handling traffic dynamics in order to avoid network congestion and subsequent service disruptions is one of the key tasks performed by contemporary network management systems. Given the simple but rigid routing and forwarding functionalities in IP-based environments, efficient resource management and control solutions against dynamic traffic conditions are still yet to be obtained. In this article, we introduce AMPLE, an efficient traffic engineering and management system that performs adaptive traffic control by using multiple virtualized routing topologies. The proposed system consists of two complementary components: offline link weight optimization, which takes as input the physical network topology and tries to produce maximum routing path diversity across multiple virtual routing topologies for long-term operation through the optimized setting of link weights; and adaptive traffic control, which, based on these diverse paths, performs intelligent traffic splitting across individual routing topologies in reaction to the monitored network dynamics at short timescale. According to our evaluation with real network topologies and traffic traces, the proposed system is able to cope almost optimally with unpredicted traffic dynamics and, as such, it constitutes a new proposal for achieving better quality of service and overall network performance in IP networks.

NETWORKING

2012

19 Computing Localized Power-Efficient Data Aggregation Trees for Sensor Networks

We propose localized, self-organizing, robust, and energy-efficient data aggregation tree approaches for sensor networks, which we call Localized Power-Efficient Data Aggregation Protocols (L-PEDAPs). They are based on topologies, such as LMST and RNG, that can approximate the minimum spanning tree and can be efficiently computed using only the position or distance information of one-hop neighbors. The actual routing tree is constructed over these topologies. We also consider different parent selection strategies while constructing a routing tree. We compare each topology and parent selection strategy and conclude that the best among them is the shortest-path strategy over the LMST structure. Our solution also involves route maintenance procedures that will be executed when a sensor node fails or a new node is added to the network. The proposed solution is also adapted to consider the remaining power levels of nodes in order to increase the network lifetime. Our simulation results show that by using our power-aware localized approach, we can achieve almost the same performance as a centralized solution in terms of network lifetime, and close to 90 percent of an upper bound derived here.

NETWORKING

2012
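
Entry 19 builds routing trees over localized topologies such as LMST and RNG that approximate a minimum spanning tree from one-hop position information. The sketch below constructs a Relative Neighborhood Graph by the standard rule (drop an edge if some third node is closer to both endpoints); it is the brute-force centralized version, whereas real nodes would apply the test to one-hop neighbors only.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of building the Relative Neighborhood Graph (RNG): an edge (u, v) is
// kept only if no third node w is strictly closer to both u and v than they
// are to each other. Brute-force O(n^3) construction for illustration.
public class RngBuilder {
    record Node(int id, double x, double y) {}
    record Edge(int u, int v) {}

    private static double dist(Node a, Node b) {
        return Math.hypot(a.x() - b.x(), a.y() - b.y());
    }

    public static List<Edge> build(List<Node> nodes) {
        List<Edge> edges = new ArrayList<>();
        for (int i = 0; i < nodes.size(); i++) {
            for (int j = i + 1; j < nodes.size(); j++) {
                double dij = dist(nodes.get(i), nodes.get(j));
                boolean witnessExists = false;
                for (Node w : nodes) {
                    if (w.id() == nodes.get(i).id() || w.id() == nodes.get(j).id()) continue;
                    if (dist(nodes.get(i), w) < dij && dist(nodes.get(j), w) < dij) {
                        witnessExists = true; // w "blocks" the edge
                        break;
                    }
                }
                if (!witnessExists) edges.add(new Edge(nodes.get(i).id(), nodes.get(j).id()));
            }
        }
        return edges;
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(new Node(0, 0, 0), new Node(1, 10, 0), new Node(2, 5, 1));
        System.out.println(build(nodes)); // edge (0,1) is dropped because node 2 lies between them
    }
}
```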

20 Improving Energy Saving and Reliability in Wireless Sensor Networks Using a Simple CRT-Based Packet-Forwarding Solution

This paper deals with a novel forwarding scheme for wireless sensor networks aimed at combining low computational complexity and high performance in terms of energy efficiency and reliability. The proposed approach relies on a packet-splitting algorithm based on the Chinese Remainder Theorem (CRT) and is characterized by a simple modular division between integers. An analytical model for estimating the energy efficiency of the scheme is presented, and several practical issues such as the effect of unreliable channels, topology changes, and MAC overhead are discussed. The results obtained show that the proposed algorithm outperforms traditional approaches in terms of power saving, simplicity, and fair distribution of energy consumption among all nodes in the network.

NETWORKING

2012
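
The packet-splitting idea in entry 20 can be illustrated directly: a payload value is reduced modulo several pairwise-coprime moduli, the small residues travel separately, and the sink reconstructs the payload with the Chinese Remainder Theorem. The moduli below are illustrative; the forwarding, reliability, and energy aspects of the actual scheme are not shown.

```java
import java.math.BigInteger;
import java.util.List;

// Sketch of CRT packet splitting: a payload is split into residues modulo
// pairwise-coprime moduli and reconstructed at the sink with the Chinese
// Remainder Theorem. The moduli are illustrative choices.
public class CrtSplitter {
    private static final List<BigInteger> MODULI =
            List.of(BigInteger.valueOf(247), BigInteger.valueOf(251), BigInteger.valueOf(256));

    /** Split a payload (must be smaller than the product of the moduli) into residues. */
    public static List<BigInteger> split(BigInteger payload) {
        return MODULI.stream().map(payload::mod).toList();
    }

    /** Reconstruct the payload from its residues using the standard CRT formula. */
    public static BigInteger reconstruct(List<BigInteger> residues) {
        BigInteger product = MODULI.stream().reduce(BigInteger.ONE, BigInteger::multiply);
        BigInteger result = BigInteger.ZERO;
        for (int i = 0; i < MODULI.size(); i++) {
            BigInteger m = MODULI.get(i);
            BigInteger partial = product.divide(m);      // product of the other moduli
            BigInteger inverse = partial.modInverse(m);  // exists because the moduli are coprime
            result = result.add(residues.get(i).multiply(partial).multiply(inverse));
        }
        return result.mod(product);
    }

    public static void main(String[] args) {
        BigInteger payload = BigInteger.valueOf(1_234_567);
        System.out.println(reconstruct(split(payload))); // 1234567
    }
}
```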

21   Continuous Neighbor Discovery in Asynchronous Sensor Networks

Anonymizing networks such as Tor allow users to access Internet services privately by using a series of routers to hide the client’s IP address from the server. The success of such networks, however, has been limited by users employing this anonymity for abusive purposes such as defacing popular Web sites. Web site administrators routinely rely on IP-address blocking for disabling access to misbehaving users, but blocking IP addresses is not practical if the abuser routes through an anonymizing network. As a result, administrators block all known exit nodes of anonymizing networks, denying anonymous access to misbehaving and behaving users alike. To address this problem, we present Nymble, a system in which servers can “blacklist” misbehaving users, thereby blocking users without compromising their anonymity. Our system is thus agnostic to different servers’ definitions of misbehavior—servers can blacklist users for whatever reason, and the privacy of blacklisted users is maintained

NETWORKING

2012

22 Energy Efficient Routing Mechanism in Wireless Sensor Network

This paper gives a brief idea about wireless sensor networks and energy-efficient routing in wireless sensor networks. Sensor networks are deployed in an ad hoc fashion, with individual nodes remaining largely inactive for long periods of time, but then becoming suddenly active when something is detected. Sensor networks are generally battery-constrained. They are prone to failure, and therefore the sensor network topology changes frequently. In this paper, we propose a routing algorithm for wireless sensor networks combining energy-efficient and hierarchical routing techniques, which minimizes energy consumption, increases the lifetime of the sensor nodes, and saves battery power.

NETWORKING

2012

23    Adaptive Opportunistic Routing for Wireless Ad Hoc Networks

A distributed adaptive opportunistic routing scheme for multihop wireless ad hoc networks is proposed. The proposed scheme utilizes a reinforcement learning framework to opportunistically route the packets even in the absence of reliable knowledge about channel statistics and network model. This scheme is shown to be optimal with respect to an expected average per-packet reward criterion. The proposed routing scheme jointly addresses the issues of learning and routing in an opportunistic context, where the network structure is characterized by the transmission success probabilities. In particular, this learning framework leads to a stochastic routing scheme that optimally “explores” and “exploits” the opportunities in the network.

NETWORKING

2012
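
Entry 23 frames relay selection as a learning problem that balances exploring new relays with exploiting the best-known one. The sketch below uses a simple epsilon-greedy average-reward learner as a stand-in for the paper's reinforcement-learning framework; the reward definition and epsilon value are assumptions.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Sketch of learning-based relay selection: each candidate next hop keeps a
// running average per-packet reward; the sender mostly exploits the best-known
// relay and occasionally explores others (epsilon-greedy).
public class OpportunisticRelaySelector {
    private final Map<String, Double> avgReward = new HashMap<>();
    private final Map<String, Integer> attempts = new HashMap<>();
    private final double epsilon;
    private final Random random = new Random();

    public OpportunisticRelaySelector(double epsilon) { this.epsilon = epsilon; }

    /** Choose a relay: explore with probability epsilon, otherwise exploit the best average reward. */
    public String chooseRelay(List<String> candidates) {
        if (random.nextDouble() < epsilon) {
            return candidates.get(random.nextInt(candidates.size()));
        }
        String best = candidates.get(0);
        for (String c : candidates) {
            if (avgReward.getOrDefault(c, 0.0) > avgReward.getOrDefault(best, 0.0)) best = c;
        }
        return best;
    }

    /** Update the running average reward after observing a transmission outcome. */
    public void recordOutcome(String relay, double reward) {
        int n = attempts.merge(relay, 1, Integer::sum);
        double old = avgReward.getOrDefault(relay, 0.0);
        avgReward.put(relay, old + (reward - old) / n);
    }

    public static void main(String[] args) {
        OpportunisticRelaySelector s = new OpportunisticRelaySelector(0.1);
        s.recordOutcome("relayA", 1.0); // delivery succeeded
        s.recordOutcome("relayB", 0.0); // delivery failed
        System.out.println(s.chooseRelay(List.of("relayA", "relayB"))); // usually relayA
    }
}
```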

24 The CoQUOS Approach to Continuous Queries in Unstructured Overlays

The current peer-to-peer (P2P) content distribution systems are constricted by their simple on-demand content discovery mechanism. The utility of these systems can be greatly enhanced by incorporating two capabilities, namely a mechanism through which peers can register their long-term interests with the network so that they can be continuously notified of new data items, and a means for the peers to advertise their contents. Although researchers have proposed a few unstructured overlay-based publish-subscribe systems that provide the above capabilities, most of these systems require intricate indexing and routing schemes, which not only make them highly complex but also render the overlay network less flexible towards transient peers. This paper argues that for many P2P applications, implementing full-fledged publish-subscribe systems is an overkill. For these applications, we study the alternate continuous query paradigm, which is a best-effort service providing the above two capabilities. We present a scalable and effective middleware called CoQUOS for supporting continuous queries in unstructured overlay networks. Besides being independent of the overlay topology, CoQUOS preserves the simplicity and flexibility of the unstructured P2P network. Our design of the CoQUOS system is characterized by two novel techniques, namely a cluster-resilient random walk algorithm for propagating the queries to various regions of the network and a dynamic probability-based query registration scheme to ensure that the registrations are well distributed in the overlay. Further, we also develop effective and efficient schemes for providing resilience to the churn of the P2P network and for ensuring a fair distribution of the notification load among the peers. This paper studies the properties of our algorithms through theoretical analysis. We also report a series of experiments evaluating the effectiveness and the costs of the proposed schemes.

NETWORKING

2012
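
Entry 24 propagates continuous queries with a cluster-resilient random walk and registers them probabilistically so registrations spread across the overlay. The sketch below shows a plain random walk with fixed-probability registration; the TTL, registration probability, and adjacency-map overlay representation are illustrative assumptions, and the cluster-resilience and fairness mechanisms are not modeled.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;

// Sketch of CoQUOS-style query propagation: a continuous query performs a
// random walk over the unstructured overlay and registers itself at each
// visited peer with a fixed probability, so registrations spread across the
// network instead of piling up near the source.
public class ContinuousQueryWalk {
    private static final Random RANDOM = new Random(42);

    /** Returns the set of peers at which the query got registered. */
    public static Set<String> propagate(Map<String, List<String>> overlay,
                                        String source, int ttl, double registerProbability) {
        Set<String> registeredAt = new HashSet<>();
        String current = source;
        for (int hop = 0; hop < ttl; hop++) {
            List<String> neighbors = overlay.getOrDefault(current, List.of());
            if (neighbors.isEmpty()) break;
            current = neighbors.get(RANDOM.nextInt(neighbors.size())); // take one random step
            if (RANDOM.nextDouble() < registerProbability) {
                registeredAt.add(current); // peer stores the query and will notify on matching content
            }
        }
        return registeredAt;
    }

    public static void main(String[] args) {
        Map<String, List<String>> overlay = Map.of(
                "p1", List.of("p2", "p3"),
                "p2", List.of("p1", "p3"),
                "p3", List.of("p1", "p2"));
        System.out.println(propagate(overlay, "p1", 10, 0.3));
    }
}
```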

25 Secure Data Transmission in Wireless Broadcast Services With Efficient Key Management

Wireless broadcast is an effective approach for disseminating data to a number of users. To provide secure access to data in wireless broadcast services, symmetric-key-based encryption is used to ensure that only users who own the valid keys can decrypt the data. With regard to various subscriptions, an efficient key management for distributing and changing keys is in great demand for access control in broadcast services. In this paper, we propose an efficient key management scheme, namely, key tree reuse (KTR), to handle key distribution with regard to complex subscription options and user activities. Key Tree Reuse has the following advantages. First, it supports all subscription activities in wireless broadcast services. Second, in KTR, a user only needs to hold one set of keys for all subscribed programs, instead of separate sets of keys for each program. Third, KTR identifies the minimum set of keys that must be changed to ensure broadcast security and minimize the rekey cost. Our simulations show that KTR can save about 45 percent of communication overhead in the broadcast channel and about 50 percent of decryption cost for each user compared with logical-key-hierarchy-based approaches.

NETWORKING

2012
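
Entry 25's scheme is built on key trees. The sketch below shows the generic logical-key-hierarchy view that KTR refines: users sit at the leaves of a binary key tree, and when a user leaves, every key on the path from that leaf to the root must be replaced. KTR's reuse of keys across subscribed programs and its rekey-cost minimization are not modeled here.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the generic key-tree (logical key hierarchy) idea: users sit at
// the leaves of a complete binary tree of keys; when a user leaves, every key
// on the path from its leaf to the root is compromised and must be replaced.
public class KeyTree {

    /**
     * Keys are indexed as in a binary heap: root = 0, children of i are 2i+1 and 2i+2.
     * @param leafIndex index of the leaving user's leaf key
     * @return indices of all keys that must be regenerated, from leaf up to the root
     */
    public static List<Integer> keysToRekeyOnLeave(int leafIndex) {
        List<Integer> toRekey = new ArrayList<>();
        int node = leafIndex;
        while (node > 0) {
            toRekey.add(node);
            node = (node - 1) / 2; // move to the parent key
        }
        toRekey.add(0); // the group key at the root always changes
        return toRekey;
    }

    public static void main(String[] args) {
        // In a tree with 8 users the leaves occupy indices 7..14; the user at leaf 10 leaves.
        System.out.println(keysToRekeyOnLeave(10)); // [10, 4, 1, 0]
    }
}
```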
