JDIM

Volume 3 Issue 2 June 2005

Managing Trust in Peer-to-Peer Networks

Wen Tang, Yun-Xiao Ma, Zhong Chen


Abstract The notion of trust is fundamental in open networks for enabling peers to share resources and services. Since trust is related to subjective observers (peers), it is necessary to consider the fuzzy nature of trust when representing, estimating and updating trustworthiness in peer-to-peer networks. In this paper, we present a fuzzy theory-based trust model for trust evaluation, recommendation and reasoning in peer-to-peer networks. Fuzzy theory is an appropriate formal method for handling the fuzziness that constantly arises in trust between peers, and fuzzy logic provides a useful and flexible tool: fuzzy IF-THEN rules can model the knowledge, experiences and criteria of trust reasoning that people use in everyday life.
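The fuzzy IF-THEN style of trust reasoning the abstract describes can be sketched roughly as follows. This is a hypothetical illustration, not the authors' model: the membership shapes, rule base and defuzzification scheme are all assumptions.

```python
# Illustrative fuzzy trust evaluation (hypothetical; not the paper's model).
# A trust score in [0,1] is fuzzified into linguistic terms, combined by
# simple IF-THEN rules, and defuzzified back to a crisp value.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed linguistic terms for trust.
TERMS = {
    "low":    lambda x: triangular(x, -0.01, 0.0, 0.5),
    "medium": lambda x: triangular(x, 0.0, 0.5, 1.0),
    "high":   lambda x: triangular(x, 0.5, 1.0, 1.01),
}

def evaluate_trust(direct, recommended):
    """One rule per term, e.g. IF direct is high AND recommendation is high
    THEN trust is high. AND is min; defuzzify by weighted centroid."""
    centroids = {"low": 0.0, "medium": 0.5, "high": 1.0}
    strength = {t: min(TERMS[t](direct), TERMS[t](recommended)) for t in TERMS}
    num = sum(strength[t] * centroids[t] for t in strength)
    den = sum(strength.values()) or 1.0
    return num / den

print(evaluate_trust(0.9, 0.8))  # high direct + high recommendation -> high trust
```

A richer rule base would weight direct experience above recommendations; this sketch treats them symmetrically for brevity.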


A Peer-to-Peer Workflow Model for Distributing Large-Scale Workflow Data onto Grid/P2P

Kwang-Hoon Kim, Hak-Sung Kim


Abstract In the workflow technology literature, very-large-scale workflow architectures, systems and applications have long sought distributed computing infrastructures that maximize performance and efficiency while using minimal resources. Almost all conventional workflow systems are based on client-server and clustering computing environments. However, as Grid/P2P has emerged as a very feasible infrastructure for very-large-scale information systems, reasonable approaches are needed for applying Grid/P2P as an infrastructure for very-large-scale workflow systems. This paper is one such attempt, exploring how workflow systems can be fitted to the nature of Grid/P2P.


Semantic Data Integration Framework in Peer-to-Peer based Digital Libraries

Hao Ding, Ingeborg T Sølvberg


Abstract This paper presents our approach to integrating heterogeneous metadata records in Peer-to-Peer (P2P) based digital libraries (DL). We first present the advantages of adopting a P2P network over other approaches for searching information among moderate-sized digital libraries. Before presenting the semantic integration solution, we describe a P2P architecture built on the JXTA protocol. By adopting JXTA, peers can automatically discover the other candidates that can provide the most appropriate answers; this feature is realized by the advertising functionality introduced in the query process in the paper. As to metadata integration, since resources may adopt distinct metadata, standardized or not, we employ the widely adopted Dublin Core [17] as the globally shared metadata schema to support interoperation. The paper also describes the mechanism of applying inference rules to convert heterogeneous metadata to the local repository schema.
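The rule-based conversion of heterogeneous metadata to Dublin Core could look something like the sketch below. The source schemas, field names and rule tables are illustrative assumptions, not the paper's actual rule set.

```python
# Hypothetical sketch of rule-based metadata conversion to Dublin Core.
# Source field names and mappings are made-up examples.

# Each rule maps a source schema's field to a Dublin Core element.
RULES = {
    "marc":  {"245a": "dc:title", "100a": "dc:creator", "260c": "dc:date"},
    "local": {"name": "dc:title", "author": "dc:creator", "year": "dc:date"},
}

def to_dublin_core(record, schema):
    """Apply the conversion rules for `schema`, dropping unmapped fields."""
    rules = RULES[schema]
    return {rules[k]: v for k, v in record.items() if k in rules}

print(to_dublin_core({"name": "P2P Survey", "author": "Doe", "pages": 12}, "local"))
# {'dc:title': 'P2P Survey', 'dc:creator': 'Doe'}
```

With every peer exporting the same Dublin Core view, queries can be phrased once against the shared schema rather than per-repository.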


Using J2EE/NET Clusters for Parallel Computations of Join Queries in Distributed Databases

Yosi Ben-Asher, Shlomo Berkovsky, Eduard Gelzin, Ariel Tammam, Miri Vilkhov


Abstract We consider the problem of parallel execution of the Join operation by J2EE/.NET clusters. These clusters are basically intended for coarse-grain distributed processing of multiple queries/business transactions over the Web. Thus, the possibility of using J2EE/.NET clusters for fine-grain parallel computations (parallel Joins in our case) is intriguing and of practical interest. We have developed a new variant of the SFR algorithm for parallel Join operations and proved its optimality in terms of communication/execution-time tradeoffs via a simple lower bound. Two variants of the SFR algorithm were implemented over the J2EE and .NET platforms. The experimental results show that, despite the fact that J2EE/.NET are considered platforms that use complex interfaces and software entities, J2EE/
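The details of the SFR variant are not given in the abstract, but the general shape of a fine-grain parallel join can be sketched with an ordinary hash-partitioned join. This is a generic illustration, not the paper's SFR algorithm; the partition count and relation layout are assumptions.

```python
# Generic hash-partitioned parallel join, sketched to illustrate the kind of
# fine-grain computation the abstract targets (not the SFR variant itself).

def parallel_join(r, s, n_workers=4):
    """Partition both relations by join-key hash; join partitions independently.
    Each partition's local join could run on a separate cluster node."""
    buckets_r = [[] for _ in range(n_workers)]
    buckets_s = [[] for _ in range(n_workers)]
    for key, val in r:
        buckets_r[hash(key) % n_workers].append((key, val))
    for key, val in s:
        buckets_s[hash(key) % n_workers].append((key, val))
    result = []
    for br, bs in zip(buckets_r, buckets_s):  # one local hash join per worker
        index = {}
        for key, val in br:
            index.setdefault(key, []).append(val)
        for key, val in bs:
            for rv in index.get(key, []):
                result.append((key, rv, val))
    return result

print(parallel_join([(1, "a"), (2, "b")], [(1, "x"), (3, "y")]))  # [(1, 'a', 'x')]
```

The communication cost such schemes pay is the shuffling of tuples into partitions, which is exactly the cost/execution-time tradeoff the paper's lower bound addresses.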


Data Quality Management in a Database Cluster with Lazy Replication

Cecile Le Pape, Stephane Gancarski, Patrick Valduriez


Abstract We consider the use of a database cluster with lazy replication. In this context, controlling the quality of replicated data based on users’ requirements is important for improving performance. However, existing approaches are limited to a particular aspect of data quality. In this paper, we propose a general model of data quality that distinguishes between the “freshness” and the “validity” of data. Data quality is expressed through divergence measures from data with perfect quality. Users can thus specify the minimum level of quality for their queries, and this information can be exploited to optimize query load balancing. We implemented our approach in our Refresco prototype. The results show that freshness control can increase query throughput significantly. They also show significant improvement when freshness requirements are specified at the relation level rather than at the database level.
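The load-balancing idea — route each query to a replica whose divergence is within the query's tolerance — can be sketched as below. This is a minimal illustration under assumed data structures, not the Refresco implementation.

```python
# Illustrative freshness-bounded query routing in a lazily replicated
# cluster (hypothetical sketch; not the Refresco prototype's code).

def route_query(replicas, max_staleness):
    """Pick the least-loaded replica whose staleness (seconds behind the
    master) is within the query's tolerated divergence."""
    fresh_enough = [r for r in replicas if r["staleness"] <= max_staleness]
    if not fresh_enough:
        return None  # fall back to the master, or wait for a refresh
    return min(fresh_enough, key=lambda r: r["load"])

replicas = [
    {"name": "r1", "staleness": 0.0,  "load": 9},
    {"name": "r2", "staleness": 5.0,  "load": 2},
    {"name": "r3", "staleness": 30.0, "load": 1},
]
print(route_query(replicas, 10.0)["name"])  # r2: fresh enough and least loaded
```

Relaxing the freshness bound widens the set of eligible replicas, which is why per-query (and per-relation) requirements improve throughput over a single database-wide bound.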


Content-Aware Segment-Based Video Adaptation

Mulugeta Libsie, Harald Kosch


Abstract Video adaptation is an active research area aiming at delivering heterogeneous content to heterogeneous devices under different network conditions. It is an important component of multimedia data management, addressing the problem of delivering multimedia data in distributed heterogeneous environments. This paper presents a novel method of video adaptation called segment-based adaptation. It applies different reduction methods to different segments based on physical content. The video is first partitioned into homogeneous segments based on physical characteristics. Then optimal reduction methods are selected and applied to each segment with the objective of minimizing quality loss and/or maximizing data size reduction during adaptation. In addition to this new method of variation creation, the commonly used reduction methods are also implemented. To realize variation creation, a unifying framework called the Variation Factory is developed. It is extended to the Multi-Step Variation Factory, which allows intermediary videos to serve as variations and also as sources for further variations. Our proposals are implemented as part of a server component, called the Variation Processing Unit (VaPU), that generates different versions of the source and an MPEG-7 metadata document.
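The per-segment selection step can be illustrated as a small constrained optimization: for each segment, choose the reduction method with the best size reduction whose quality loss stays within bound. The segment kinds, method names and cost numbers below are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of segment-based adaptation: pick a reduction method
# per segment. Method names and loss/reduction figures are assumptions.

def adapt(segments, methods, max_quality_loss):
    """For each segment, pick the method maximizing size reduction while
    keeping quality loss within the bound for that segment's content type."""
    plan = []
    for seg in segments:
        candidates = [m for m in methods
                      if m["loss"][seg["kind"]] <= max_quality_loss]
        best = max(candidates, key=lambda m: m["reduction"][seg["kind"]])
        plan.append((seg["id"], best["name"]))
    return plan

methods = [
    {"name": "drop_frames", "loss": {"static": 0.1, "action": 0.6},
     "reduction": {"static": 0.5, "action": 0.5}},
    {"name": "lower_res", "loss": {"static": 0.3, "action": 0.2},
     "reduction": {"static": 0.4, "action": 0.4}},
]
segments = [{"id": 0, "kind": "static"}, {"id": 1, "kind": "action"}]
print(adapt(segments, methods, max_quality_loss=0.3))
# [(0, 'drop_frames'), (1, 'lower_res')]
```

The point of segmenting first is visible even in this toy: frame dropping is cheap on static segments but unacceptable on action segments, so a single whole-video method would be suboptimal.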


Content Adaptation in Distributed Multimedia Systems

Girma Berhe, Lionel Brunie, Jean-Marc Pierson


Abstract New developments in computing and communication technology facilitate mobile data access to multimedia application systems such as healthcare, tourism and emergency services. In these applications, users can access information with a variety of devices having heterogeneous capabilities. One of the requirements in these applications is to adapt the content to the user’s preferences, device capabilities and network conditions. In this paper we present a distributed content adaptation approach for distributed multimedia systems. In this approach, content adaptation is performed in several steps, and the adaptation tools are implemented by external services, which we call adaptation services. In order to determine the type and sequence of the adaptation services, we develop an adaptation graph based on the client profile, network conditions, content profile (metadata) and available adaptation services. Different quality criteria are used to optimize the adaptation graph.
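Deriving a sequence of adaptation services from profile mismatches might look like the sketch below. The profile fields, service names and ordering are illustrative assumptions; the paper's adaptation graph additionally optimizes among alternative chains by quality criteria.

```python
# Hypothetical sketch: derive an ordered adaptation chain from mismatches
# between the content profile and the client/network profile.

def plan_adaptation(content, client):
    """Return an ordered list of (service, target) adaptation steps."""
    steps = []
    if content["format"] not in client["formats"]:
        steps.append(("transcode", client["formats"][0]))
    if content["width"] > client["max_width"]:
        steps.append(("resize", client["max_width"]))
    if content["bitrate"] > client["bandwidth"]:
        steps.append(("reduce_bitrate", client["bandwidth"]))
    return steps

content = {"format": "avi", "width": 1280, "bitrate": 2000}
client = {"formats": ["mp4"], "max_width": 640, "bandwidth": 500}
print(plan_adaptation(content, client))
# [('transcode', 'mp4'), ('resize', 640), ('reduce_bitrate', 500)]
```

Each step would then be bound to a concrete external adaptation service discovered at run time.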


Reducing Communication Overhead over Distributed Data Streams by Filtering Frequent Items

Dongdong Zhang, Jianzhong Li, Weiping Wang, Longjiang Guo, Chunyu Ai


Abstract In distributed data stream systems, the available communication bandwidth is a bottleneck resource. To improve its availability, communication overhead should be reduced as much as possible under the constraint of query precision. In this paper, a new approach is proposed for transferring data streams in distributed data stream systems. By transferring the estimated occurrence counts of frequent items, instead of the raw frequent items, it saves substantial communication overhead. Meanwhile, in order to guarantee query precision, the difference between the estimated and true occurrence count of each frequent item is also sent to the central stream processor. We present an algorithm for processing frequent items over distributed data streams and give a method for supporting aggregate queries over the preprocessed frequent items. Finally, the experimental results demonstrate the efficiency of our method.
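The summarization idea — ship (item, estimate, error) triples for frequent items rather than the raw stream — can be sketched as follows. The thresholds, rounding scheme and message layout are assumptions for illustration, not the paper's algorithm.

```python
# Hypothetical sketch: a remote site summarizes its window and sends
# (item, estimated_count, error) triples for frequent items only.
from collections import Counter

def summarize(window, min_support, rounding=10):
    """Round each frequent item's count (fewer bits to transmit) and ship the
    rounding error alongside so the central processor can bound precision."""
    counts = Counter(window)
    summary = []
    for item, true_count in counts.items():
        if true_count >= min_support:         # infrequent items are filtered out
            estimate = round(true_count / rounding) * rounding
            summary.append((item, estimate, true_count - estimate))
    return summary

window = ["a"] * 47 + ["b"] * 12 + ["c"] * 3
print(summarize(window, min_support=10))  # [('a', 50, -3), ('b', 10, 2)]
```

At the central processor, an aggregate such as SUM over "a" can use the estimate alone for an approximate answer, or estimate plus error for an exact one.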


An Original Solution to Evaluate Location-Dependent Queries in Wireless Environments

Marie Thilliez, Thierry Delot, Sylvain Lecomte


Abstract The recent emergence of handheld devices and wireless networks has provoked an exponential increase in the number of mobile users. These users are potential consumers of new applications, such as the Location-Dependent Applications (LDA) examined in this article. As their name implies, these applications depend on location information, which is used to adapt and customize the application for each user. In this article, we focus on the problem of information localization, particularly the evaluation of Location-Dependent Queries (LDQ). Such queries allow, for example, a mobile user who is in an airport to locate the closest bus stop to go to the university. To evaluate these queries, the client position must be retrieved. Often, positioning systems such as GPS are used for this purpose; however, not all mobile clients are equipped with such systems, and these systems are not well suited to every environment. To remedy this, we propose a positioning solution based on environment metadata that can provide an approximate client position, sufficient for evaluating LDQs. This paper presents both the positioning system and its optimization with regard to minimizing response time and economizing mobile device resources.


Energy Efficient Cache Invalidation in a Mobile Environment

Marie Thilliez, Thierry Delot, Sylvain Lecomte


Abstract Caching in a mobile computing environment has emerged as a potential technique to improve data access performance and availability by reducing the interaction between the client and the server. A cache invalidation strategy ensures that a cached item at a mobile client has the same value as on the origin server. To maintain cache consistency, the server periodically broadcasts an invalidation report (IR) so that each client can invalidate obsolete data items from its cache. The IR strategy suffers from long query latency, large tuning time and poor utilization of wireless bandwidth. Using an updated invalidation report (UIR), the long query latency can be reduced. This paper presents a caching strategy that preserves the advantages of existing IR- and UIR-based strategies and improves on their disadvantages. Simulation results show that our strategy yields better performance than the IR- and UIR-based strategies.
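Client-side processing of a broadcast invalidation report can be sketched as below. The report layout (a coverage window plus a list of update timestamps) is an assumption chosen to illustrate the IR idea, not the paper's protocol.

```python
# Minimal sketch of invalidation-report (IR) processing at a mobile client
# (illustrative; the report's field layout is an assumption).

def apply_report(cache, report, last_sync):
    """Drop cached items updated on the server since the client last listened.
    `report["updates"]` lists (item_id, update_timestamp) pairs; the report
    only covers updates from `report["window_start"]` onward."""
    if last_sync < report["window_start"]:
        cache.clear()  # disconnected longer than the report covers: drop all
        return cache
    for item_id, ts in report["updates"]:
        if ts > last_sync:
            cache.pop(item_id, None)
    return cache

cache = {"x": 1, "y": 2, "z": 3}
report = {"window_start": 100, "updates": [("x", 150), ("y", 90)]}
print(apply_report(cache, report, last_sync=120))  # {'y': 2, 'z': 3}
```

The whole-cache drop on long disconnection is precisely the weakness that motivates work on report composition: a coarse report forces clients to discard items that may still be valid.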


Composing Optimal Invalidation Reports for Mobile Databases

Wen-Chi Hou, Chih-Fang Wang


Abstract Caching can reduce expensive data transfers and improve the performance of mobile computing. In order to reuse caches after short disconnections, invalidation reports are broadcast to clients to identify outdated items. Detailed reports may not be desirable because they can consume too much bandwidth. On the other hand, false invalidations may occur if accurate timing of updates is not provided. In this research, we aim to reduce the false invalidation rates of cached items. From our analysis, we found that false invalidation rates are closely related to clients’ reconnection patterns (i.e., the distribution of the time spans between disconnections and reconnections). We show that, in theory, for any given reconnection pattern a report with a minimal false invalidation rate can be derived. Experimental results confirm that our method is more effective in reducing the false invalidation rate than others.


Copyright © 2016 Journal of Digital Information Management (JDIM)