Introduction
This research paper is a study of the development and implementation of a document vending application. Notably, other online prepayment vending applications exist, so this paper will also cover a number of vending-related terms; this helps to avoid duplication and keeps the discussion practical. Nonetheless, many organizations that run online businesses and vending-related processes are not accustomed to prepaid-specific terminology. As a general rule, particular document-vending definitions and terms will therefore be included in this paper. Related concepts such as token technology and algorithms will be incorporated, but they will be described in non-technical language so that they can be understood in the context of the document vending application.
Technology is increasingly being used as an alternative channel for sharing documents and information. Many organizations, including media companies, institutional libraries, journal databases and similar organizations, have shifted their focus to tap into the growing online demand for vended documents. To develop a successful document vending system, however, a careful design of a multi-tier architecture model is necessary. While most existing models provide an equally useful platform for executing the technology, implementing the model in the powerful Java programming language gives developers an opportunity to tap into on-demand smartphone use.
The framework views the Internet as a set of users who are linked globally via a low-speed connection. Servers hosting application files, such as WWW content, are situated on the far side of the connection. All accesses to these server applications are diverted over a gateway. Two major parts make up the gateway: the remote gateway on the server side of the connection and the local gateway on the client's side, as demonstrated in Figure 1 below.
System Model: Figure 1
- During the implementation of the framework, the system has no control over any system or software beyond the remote gateway. Specifically, the WWW server application cannot be modified.
- The user side has a single gateway to the entire Internet, through which all HTTP requests are processed.
- The local gateway connects to the remote gateway over a low-bandwidth link (e.g., 64 kilobits/sec).
- Only a single local gateway links to the remote gateway.
- All non-WWW traffic on the connection is disregarded.
Most existing models in the market are presented as web-based applications written in HTML and other web-based languages. Although web-based access to files and documents for sharing and vending may seem attractive for its ease of implementation, it presents a challenge for interactive operation, especially in low-bandwidth areas (Uehara, 2004). The Java programming language, on the other hand, provides a robust technology for developing both applets and Android applications. It therefore seems reasonable to redevelop these models using Java technology.
Background and Motivation
Web-based document vending systems present serious interactivity limitations for their users. There is a need to create a client-server system implemented in a cross-platform language that allows easy development of a smooth interaction between the users and the server.
Numerous services on the Internet, such as the Web, FTP, and Usenet, can be viewed as arrangements of files which are stored on particular servers and accessed from particular clients. One of the most critical issues confronting Internet clients is the long response time often experienced today. This is because of:
- Load on the servers hosting the documents;
- Load on the communication lines over which the documents are conveyed.
The issue is particularly pronounced for sites that are linked to the wider Internet by relatively low-speed connections. Many clients, for example customers of a geographically isolated Internet service provider, medium-sized organizations, and dial-up clients, face critical bandwidth limitations. Although the availability of fast connections is increasing, the increased traffic volumes created by new applications mean that these connections stay congested.
The vending application attempts to reduce the overall latency of document access for clients reaching the Internet over a low-speed, heavily loaded connection. The latency of such a framework is composed of three parts:
- Latency between the user and the local end of the connection.
- The latency of the link between the two ends of the connection, due to the low connection speed and connection congestion.
- Latency between the remote end of the connection and the server, because of the response time of the server, and congestion somewhere else on the Internet.
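As a rough worked example of the second component (a sketch; the 64 kbit/s figure comes from the framework's assumptions above), the link latency alone can dominate the total:

```java
public class LinkLatency {
    /** Seconds needed to push a document of the given size over a link of the given speed. */
    static double transferSeconds(long documentBytes, long linkBitsPerSecond) {
        return (documentBytes * 8.0) / linkBitsPerSecond;
    }

    public static void main(String[] args) {
        // A 40 KB page over the assumed 64 kbit/s connection: 5 seconds of link time alone.
        System.out.println(transferSeconds(40_000, 64_000)); // prints 5.0
    }
}
```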
The main objectives of this project are to:
- Design a smartphone-based application
- Control online sales
- Control smartphone interfacing
WORA – A slogan coined by the company that invented Java, standing for Write Once, Run Anywhere; it expresses the cross-platform power of the Java language.
J2EE – An environment for building and deploying Java-centric applications online that are not dependent on any particular platform.
ODC – Offline Database Construction. A database search implementation that focuses on retrieving data by using properties of stored data.
This design and implementation proposal calls for the development of a Java applet that supports a document vending system based on a multi-tier client-server architecture model. The applet can be downloaded to a wide range of platforms by exploiting the platform independence of the Java programming language and the WORA (write once, run anywhere) principle. The client-server system will allow users to search for documents on the server and store them in a cache; the user will then be able to remove items from the cache. The user will only be able to access a document once payment is complete.
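A minimal sketch of the cache-and-payment behavior described above; the class and method names (`DocumentCache`, `recordPayment`) are illustrative assumptions, not names from the source:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of the client-side cache with payment gating. */
public class DocumentCache {
    private final Map<String, byte[]> cache = new HashMap<>();
    private final Map<String, Boolean> paid = new HashMap<>();

    /** Store a fetched document; access stays locked until payment. */
    public void store(String url, byte[] content) {
        cache.put(url, content);
        paid.putIfAbsent(url, false);
    }

    /** The user may remove items from the cache at will. */
    public void remove(String url) {
        cache.remove(url);
        paid.remove(url);
    }

    public void recordPayment(String url) {
        if (cache.containsKey(url)) paid.put(url, true);
    }

    /** Content is released only after payment completes. */
    public byte[] open(String url) {
        if (!Boolean.TRUE.equals(paid.get(url)))
            throw new IllegalStateException("payment required: " + url);
        return cache.get(url);
    }
}
```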
Similar systems have been designed and implemented in the past, although some of them were implemented in different languages. The effort discussed here applies client-server systems architecture in a setting other than general distributed systems.
Related Work
Brugali, Menga, and Guidi-Polanco (2010) implemented a framework based on J2EE, a Java platform that allows the implementation of a three-tier architecture. J2EE identifies three tiers of applications implemented on its platform: the client tier, the middle tier and the backend tier (Brugali, Menga, & Guidi-Polanco, 2010). However, the availability of such a platform makes development easier than in the project proposed in this paper, where the database implementation must be designed from scratch rather than built on existing frameworks.
In Singh and Singh (2012), a similar architecture is developed, using the same technology, for a cloud authentication system. The paper proposes a multi-tier authentication system for access to a cloud database. Despite the different conceptual basis, it provides insight into the methodologies applicable to designing the architecture proposed in this paper.
There are two methods commonly used to implement proxies: caching and mirroring. A server that stores part or all of the files from one or more other servers is referred to as a mirror. Users connect to the mirror and get files from it rather than obtaining them from the main site. Mirrors are popular among FTP servers, and a number of major FTP servers have a chain of mirror sites in different geographic areas. Mirroring has various drawbacks, which are listed below:
- Files are mirrored at periodic intervals, so some files in the mirror may be out of date.
- Most mirrored files will never, or rarely, be accessed. The bandwidth used to copy these documents from the main server, and the storage space they occupy on the mirrors, are therefore wasted.
- Clients may not know the best mirror site to use and will access unsuitable sites, negating any bandwidth saving.
- The mirror stores only a small portion of the files on the Internet. Consequently, clients cannot rely on the mirror for access to all required files.
For a mirror to be effective, it must contain a good percentage of the frequently accessed documents, be linked to its users by a high-speed connection, and be well publicized throughout its target group. The Hensa Unix site in the U.K. has implemented this effectively.
Caching is used extensively on the World Wide Web (WWW) to improve response times. All file accesses are channeled through a cache, which is generally configured as a proxy. The cache stores local copies of recently accessed files; consequently, access times for regularly accessed documents decrease, as they are served locally.
The first proxy cache was the CERN HTTPD server. Hierarchical caches were introduced through the Harvest caching framework, in which a request can be forwarded from one local cache to other caches. In addition, a cache can be run on a web server, where it acts as an accelerator and speeds up response times; the Netscape HTTPD server likewise incorporates such a cache. A broad network of hierarchical caches can be expected to reduce server loads and response times dramatically.
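The basic cache-or-fetch behavior of such a proxy can be sketched as follows; the class name and the `originFetch` stand-in are illustrative assumptions, not the CERN or Harvest implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Minimal sketch of a proxy cache: serve local copies of recently
 *  accessed files, falling back to the origin server on a miss. */
public class ProxyCache {
    private final Map<String, String> store = new HashMap<>();
    private final Function<String, String> originFetch; // stands in for the real HTTP fetch
    int misses = 0;

    ProxyCache(Function<String, String> originFetch) {
        this.originFetch = originFetch;
    }

    String get(String url) {
        String hit = store.get(url);
        if (hit != null) return hit;           // served locally: low latency
        misses++;
        String body = originFetch.apply(url);  // slow path over the loaded link
        store.put(url, body);
        return body;
    }
}
```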
Document Vending Application Development and Implementation
Multi-tier architecture is widely used in developing models that provide access to databases. Most e-commerce systems, which closely resemble a document vending system, are built on a three-tier architecture but on the Browser/Server model. The argument usually given is that this model results in systems that are easy to maintain (Liming Ma, Sanxing Cao, & Xiao Ma, 2012). A similar architecture can be used in the development of a Client/Server model. Although more expensive, applications developed this way are much faster than those built on the Browser/Server model because they can capitalize on local resources.
The design model is an intelligent caching system that profiles the access patterns of documents such as web pages and tries to fetch them before clients request them. The cache system consists of a gateway split across the two ends of a low-speed connection. Document access patterns are modeled heuristically as well as probabilistically.
Figure 2 below also shows the need to consider an abort request. Figure 1 above could be extended to demonstrate this, but that has been left out to avoid distracting from the main points.
Figure 2:
There are four main processes involved in this system:
Document recording: The user request is received and stored in a log (URL, user's IP address, time and date). It directs the send-document process to send the file to the client from the local cache.
Document request: A requested document may be in three different states:
- Incomplete or absent document (the document is requested at high priority; each further request raises the priority level)
- An up-to-date, complete file is in the cache (idle)
- Expired document (the case when the document has changed)
Sending the document: Receives the instruction from the document-recording process to forward the document. It also checks whether any valid copy of the file is in the caches, then transmits the document to every user who demanded it, and sends a message informing the client about the delay for large files.
Purge cache: Eliminates pages based on the time they were last requested and their expiry date.
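The purge-cache process can be sketched as follows; field names such as `lastRequested` and `expires` are assumptions for illustration:

```java
import java.util.Iterator;
import java.util.Map;

/** Sketch of the purge-cache process: drop entries that have expired
 *  or have not been requested recently. */
public class PurgeCache {
    static class Entry {
        long lastRequested; // epoch millis of last client request
        long expires;       // epoch millis after which the copy is stale
        Entry(long lastRequested, long expires) {
            this.lastRequested = lastRequested;
            this.expires = expires;
        }
    }

    static void purge(Map<String, Entry> cache, long now, long maxIdleMillis) {
        Iterator<Map.Entry<String, Entry>> it = cache.entrySet().iterator();
        while (it.hasNext()) {
            Entry e = it.next().getValue();
            boolean expired = now >= e.expires;                    // expiry-date rule
            boolean idle = now - e.lastRequested > maxIdleMillis;  // last-requested rule
            if (expired || idle) it.remove();
        }
    }
}
```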
Within the local gateway, the document request process is the most complex. Figure 3 below shows its detailed operation and main processes.
Figure 3: the local gateway
Each request is assigned a priority, with values between N (highest priority) and 0 (lowest priority), where N is a framework parameter. A real client request is assigned the highest priority, N. History logs store, for each document, parameters indicating how regularly document i was requested following that document.
Necessity of Multi-Tiered Model for Document Vending Systems
The function allocating priorities to a pre-fetch demand for document i should:
- Return a numerical value between 0 and N−1
- Increase monotonically with respect to the recorded request statistics
An administrative task is required to occasionally reset the priorities for the archived data. The update is done by dividing and rounding them and removing any redundant entries.
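A minimal sketch of a priority function meeting the stated constraints (a bounded value in 0..N−1 that increases monotonically); using a raw follow-count as the statistic is an assumption, as the source only states the constraints:

```java
/** Hypothetical priority function for pre-fetch demands. */
public class PrefetchPriority {
    /** Clamp a request-frequency count into the range 0..n-1. */
    static int priority(int followCount, int n) {
        if (followCount < 0) throw new IllegalArgumentException("negative count");
        return Math.min(followCount, n - 1); // bounded by N-1, monotone in the count
    }
}
```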
Some documents are frequently read and periodically changed; newspaper content, for example, changes daily, so it is appropriate for the client's copy to be kept up to date. Assessing the history and access logs indicates how frequently a document is modified and how frequently it is accessed. From these statistics the system evaluates whether it is necessary to update a document. The local gateway must therefore:
- Forward documents to the clients
- Receive and decompress documents
- Maintain a history of all previous requests and record all clients
- Determine which documents are essential for the predicted update requests
- Transmit predicted and updated document requests to the remote gateway
- Update and purge the cache of currently loaded documents
The components of the system used to transfer messages are indicated in the diagram below.
Figure 4: Protocols overview
The proxy at the local gateway receives client HTTP requests over link 1. Requests which cannot be handled by the local gateway (i.e., irrelevant requests) are filtered out and forwarded to a directly connected server over link 3. The relevant requests are transmitted to the remote gateway using the gateway protocol.
Any demand at the remote gateway is processed by a relay procedure. If the request is not in the cache, an HTTP connection to the server is set up to fetch the document over link 4. When the file is received, the Record Document Process (RDP) reads it and the Send Document (SD) process transmits the file to the local gateway.
TCP connections 1, 3, and 4 are formed only on request, while TCP connection 2 is permanent.
This section has critically analyzed the application protocol and cache, which combine the strengths of caches and mirrors to reduce the response time of Internet applications. Diagrams illustrate the caching system, the detailed operation of the application protocol, and the local gateway.
This section of the paper addresses the various approaches and procedures employed to implement the document vending system.
The program is presented as a compiled Java application, ready to be downloaded, and can work either as an independent mobile application or as a Java applet. The application consists of a client/server model built with the multi-tier architecture. In the implementation, the following strategies are proposed:
- Evaluation of the application in a three-tier system
- Implementation of the program via proper programming of the logical components
This implementation uses a document-searching process called Offline Database Construction (ODC) (Rahman, Winarko, & Wibowo, 2017). The process involves extracting important properties of each document and storing them in a cache. The point of this process is to represent documents uniformly in a database for ease of retrieval.
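The ODC idea can be sketched as follows; the specific property names (`title`, `type`, `size`) are assumptions, as the source does not list the exact properties extracted:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of Offline Database Construction: extract searchable
 *  properties from each document and cache them in a uniform record. */
public class OfflineDatabase {
    private final Map<String, Map<String, String>> index = new HashMap<>();

    void add(String docId, String title, String mimeType, int sizeBytes) {
        Map<String, String> props = new HashMap<>();
        props.put("title", title);
        props.put("type", mimeType);
        props.put("size", Integer.toString(sizeBytes));
        index.put(docId, props);
    }

    /** Uniform lookup by property, so clients can query without touching the documents. */
    String property(String docId, String key) {
        Map<String, String> props = index.get(docId);
        return props == null ? null : props.get(key);
    }
}
```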
Proposed Framework for Java- Based Document Vending System
Figure 5
In order to reduce the workload and increase the efficiency of the application, all document retrieval processing in the vending application is to be performed offline, in advance. The documents are to be compressed so that they load faster and are retrieved easily. The features of the documents must also be easy for the customer to query (Rahman, Winarko, & Wibowo, 2017).
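The compression step can be sketched with the standard `java.util.zip` classes; the choice of GZIP is an assumption, as the source does not name a format:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

/** Sketch of compressing documents so they transfer faster over the slow link. */
public class DocumentCompressor {
    static byte[] compress(byte[] raw) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(raw); // write the raw document through the GZIP stream
        }
        return out.toByteArray();
    }

    static byte[] decompress(byte[] packed) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(packed))) {
            return gz.readAllBytes();
        }
    }
}
```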
Figure 6
The server side of the application has no such processes; this keeps the computational load as light as possible and reduces processing time.
Figure 7
Pre-fetching of documents in anticipation of client demands is a noteworthy element of our framework not available in conventional caching frameworks. It increases the load on the connection by transmitting potentially unneeded documents, which might seem unreasonable if the connection is heavily loaded. Nevertheless, it improves performance for two reasons:
- Pre-fetch requests are usually given a lower priority than client requests; they therefore never delay documents fetched for client requests. We observe that unless the connection is heavily congested, there are intervals during which it sits idle, and pre-fetched documents can be transmitted during these intervals.
- Pre-fetched documents are stored at the remote gateway. Thus, when they are requested by a client, they can be transmitted immediately. This substantially speeds up access to slow servers, particularly for in-line images.
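The first point, scheduling pre-fetch requests behind client requests, can be sketched with a priority queue; the class and field names are illustrative assumptions:

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

/** Sketch of request scheduling on the link: client requests always
 *  outrank pre-fetch requests, so pre-fetching only uses idle capacity. */
public class RequestQueue {
    static class Request {
        final String url;
        final boolean prefetch;
        Request(String url, boolean prefetch) {
            this.url = url;
            this.prefetch = prefetch;
        }
    }

    // Client requests (prefetch == false) sort ahead of pre-fetch requests.
    final PriorityBlockingQueue<Request> queue =
        new PriorityBlockingQueue<>(16, Comparator.comparing((Request r) -> r.prefetch));
}
```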
The communication between the gateways, particularly the messages regarding changes in the caches, places a load on the connection. However, since most of this traffic flows from the local to the remote gateway, while the bulk of the data flows in the other direction, it does not significantly affect performance. In addition, requests are aggregated to reduce the load on the connection.
Ideally, the local cache should store all documents ever accessed by clients, and the remote cache should store these plus all pre-fetched documents. In practice, the caches might store only a few days' worth of requested documents and just a few minutes' worth of pre-fetched documents.
The history list, however, requires many more entries; at least six months' worth is proposed. This makes it possible to predict access patterns even for documents that have been purged from the cache. Maintaining history data for one million URLs (around six months' worth) would need only on the order of one hundred MB of disk space, which can be considered reasonable.
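The sizing claim is easy to check: at roughly 100 bytes per history entry (an assumed figure consistent with the text), one million URLs come to about 100 MB:

```java
/** Back-of-the-envelope check of the history-list sizing claim. */
public class HistorySizing {
    static long estimateBytes(long entries, long bytesPerEntry) {
        return entries * bytesPerEntry;
    }

    public static void main(String[] args) {
        // 1,000,000 entries at ~100 bytes each.
        System.out.println(estimateBytes(1_000_000, 100)); // prints 100000000 (about 100 MB)
    }
}
```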
Conclusions
The present model does not yet allow firm conclusions to be drawn about the strategies described above. Detailed investigation and testing are required in order to determine the relative strengths and weaknesses of each of the procedures outlined in this paper. Once their feasibility has been demonstrated, a product could be developed for wider deployment.
However, there are various ways in which the framework could be extended, and these are described below.
The most likely extensions to the framework are to:
- One remote gateway should be connected to several local gateways;
- One local gateway should be connected to multiple remote gateways;
- Systems should be chained together in a network or a hierarchy.
In the design phase, efforts have been made to guarantee that the framework is extensible to the above situations.
Extensive analysis and experimentation should be done to set framework parameters and improve algorithms. Examples include:
- The function for determining the priority of predicted demands.
- The algorithm for determining lowest_p, the priority of demands at the remote gateways.
A procedure should be introduced to exchange user behavior and history between the caches, as well as to and from servers.
The procedure must be extended to handle requests beyond the WWW, such as FTP and News.
Current compression algorithms chiefly focus on text. Further algorithms are required for high-resolution images, audio, and real-time data.
The framework should monitor individual clients' behavior. Examples include:
- The extent to which each client loads in-line images, which can be incorporated into the prediction algorithm.
- Pre-fetching of documents should be placed under the client's control.
- Parts of documents that are not yet required should be loaded only at lower priority.
References
Brugali, D., Menga, G., & Guidi-Polanco, F. (2010). A Java Framework for Multi-Tier Web-Centric Applications Development. Web Technologies, 1(3), 1745-1767. doi:10.4018/978-1-60566-982-3.ch094
Chen, T., Li, M., Li, Y., Lin, M., Wang, N., Wang, M., … & Zhang, Z. (2015). Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274.
Liming Ma, Sanxing Cao, & Xiao Ma. (2012). A hybrid model for application development and deployment based on the multi-tier architecture. IET International Conference on Information Science and Control Engineering 2012 (ICISCE 2012). doi:10.1049/cp.2012.2298
Pierson, J. M., & Hlavacs, H. (2015). Introduction to Energy Efficiency in Large-Scale Distributed Systems. Large-Scale Distributed Systems and Energy Efficiency: A Holistic View, 1-16.
Rahman, A., Winarko, E., & Wibowo, M. E. (2017). Mobile content-based image retrieval architectures. 2017 4th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI). doi:10.1109/eecsi.2017.8239111
Singh, M., & Singh, S. (2012). Design and Implementation of Multi-tier Authentication Scheme in Cloud. International Journal of Computer Science, 9(5), 1694-1814.