Command-Line Tools
The following command-line tools are commonly used for system services and applications.
- Is-up-cli: An open-source tool that lets users check whether a website is up or down with a single command. It also ships as a library for tapping into the underlying API.
- Pageres-cli: A tool for capturing website screenshots at scale: roughly 100 screenshots from 10 different websites in under a minute. The captured screenshots can then be used for work purposes, and the tool can capture multiple resolutions from different sources.
- Surge: A command-line tool that developers use to publish websites and web content (HTML, CSS and JavaScript) to the web. A complete folder can be published with a single command (Stackify, 2017).
- Loadtest: Checking the performance of a website is necessary to ensure that customer expectations and requirements are met. Loadtest performs load testing of a website, supports custom configuration and can also carry out in-depth server testing.
- Caniuse-cmd: A tool for checking which web platform features are available across different browsers, helping catch compatibility mistakes before they disrupt the workflow.
- Moro: A tool for tracking the time spent on a particular project. It records the complete data set so that detailed reports can be produced.
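The idea behind a tool like is-up-cli can be illustrated with a short sketch: send a single HTTP request and report whether the site answered. This is a hypothetical Python illustration of the concept, not the tool's actual implementation.

```python
import http.client

def is_up(host, port=80, timeout=5):
    """Send an HTTP HEAD request and report whether the site answered.

    A hypothetical sketch of the idea behind is-up-cli, not its real code:
    any response below the 5xx server-error range counts as "up"."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("HEAD", "/")
        status = conn.getresponse().status
        conn.close()
        return status < 500
    except OSError:
        # Connection refused, DNS failure, timeout: the site is unreachable.
        return False
```

A caller would simply do `is_up("example.com")` and branch on the boolean, which mirrors the single-command workflow the tool provides.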
Server Hardware Components
- Motherboard: The main hardware component of a server, connecting the different elements of the computer. A motherboard typically includes built-in features such as a graphics adapter and a network interface. A motherboard failure can affect the functioning of the entire server, which is why the motherboard should be checked whenever a server fails.
- Processor/CPU: The processor is a central component of any computer system or server and one of the major factors in system performance and behavior; its key characteristics are clock speed and the number of cores. In case of server failure, the processor should be checked and its performance verified.
- Hard Drive: Many computer systems use IDE drives, which are adequate for individual users; however, SCSI drives are used where high performance and control are needed. The reliability and performance of systems and servers depend on their hard drives (Dummies, 2017).
- Memory: Computer systems and servers use several different types of memory, and many servers can support a memory capacity of 32GB. Memory failures can degrade server performance or bring the server down entirely.
- Network Connections: Numerous network connections and pieces of equipment are attached to a system or server for its functioning. In case of server failure, it is possible that the networking equipment is at fault.
Windows Server 2012 Features
- New Server Manager: A feature of Windows Server 2012 that allows the creation of server groups: collections of servers that can be managed together, which improves the user experience. Task management is enhanced, and every server in a group shares a set of common attributes. Companies and organizations that do not have dedicated monitoring packages and software in place can make good use of this feature.
- Command-Line First, GUI Second: Previous versions of Windows Server installed the GUI first. In Windows Server 2012, the primary option is to install the core services and server, after which the GUI role can be installed on top. This approach reduces the attack surface, resource load and energy consumption (Hassell, 2012).
- Enhanced storage space: The storage feature in Windows Server 2012 makes use of inexpensive drives and controllers. Storage is gathered into a pool and then divided into sections that can be used like regular disks, providing a great deal of scalability and flexibility (Brown, 2012).
- Dynamic Access Control: Data and information must be managed correctly and accurately. Dynamic Access Control (DAC) is a Windows Server 2012 feature that enhances access control over information sets, with only minimal additions to Active Directory. It provides a strong capability to manage file systems with enhanced security and control (Savill, 2013).
- Resilient File System: The New Technology File System (NTFS) was previously used on computer servers and systems. The Resilient File System (ReFS) builds on NTFS to provide properties such as availability and integrity. It uses checksums and real-time allocation to protect the sequencing of and access to all data. When an issue is identified, Windows Server 2012 can make automatic repairs without affecting the availability of the data (James, 2012).
Capacity Planning Strategy
A capacity planning strategy covers the mechanisms and measures that must be put in place to plan the capacity needed to satisfy the demand for products and services over a specific period of time.
Strategy Scope
The capacity planning strategy developed for cloud virtualization must be an ongoing process. It is necessary to maintain a detailed inventory of the available space and of the applications running on a particular infrastructure (Metron-athene, 2017).
When planning capacity for cloud virtualization, the set of services that a particular environment will provide should be determined first. As a next step, enough storage should be deployed to fit the size of the environment (Tang, 2017).
Steps to Follow
Several steps need to be followed to create a capacity planning strategy for cloud virtualization.
- Prepare a detailed requirements list for all services to be deployed, covering the operating system and availability requirements along with the database and planned applications. The supportability and compatibility of these services in the cloud environment should also be assessed (Raei, 2017).
- Decide on the storage and implementation size to be used. The recommended strategy is to start with free virtualization technologies that offer free storage initially and can be expanded later. Another option is a large storage environment whose utilization rate increases at specific time intervals (Boydell, 2011).
- Assess and determine the networking requirements, along with the requirements for network management and administration. Plenty of ports and tools may be needed for networking, and some networks may be dedicated to specific roles and services (Searchservervirtualization, 2017).
- Select the platforms to be used based on the results of the analysis.
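The first two steps above — gathering per-service requirements, then sizing storage with room to expand — can be sketched as a small calculation. The service names, sizes and growth factor below are illustrative assumptions, not recommendations.

```python
# Hypothetical capacity-sizing sketch: aggregate per-service requirements
# (step 1) and size storage with headroom for later expansion (step 2).

services = [
    # (name, storage needed in GB, memory needed in GB) - made-up figures
    ("web-frontend", 40, 8),
    ("database",     200, 32),
    ("monitoring",   60, 4),
]

def required_storage_gb(services, growth_factor=1.5):
    """Total storage across all services, padded so the pool can be
    expanded later rather than sized exactly to today's demand."""
    base = sum(storage for _, storage, _ in services)
    return base * growth_factor

def hosts_needed(services, host_memory_gb=64):
    """Minimum host count by memory alone (a deliberately simplified
    bound: total memory demand divided by per-host capacity)."""
    total = sum(mem for _, _, mem in services)
    return -(-total // host_memory_gb)  # ceiling division

print(required_storage_gb(services))  # 300 GB * 1.5 = 450.0
print(hosts_needed(services))         # 44 GB / 64 GB per host -> 1
```

A real plan would also account for CPU, IOPS and availability constraints, but the shape of the calculation — inventory first, then sized deployment with headroom — stays the same.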
Risk Areas
A number of risk areas and problems should be identified and assessed as part of the capacity planning strategy. These include performance and implementation issues along with technical and operational errors that may occur. A risk assessment should be carried out to prepare a list of all such possibilities, and a response strategy should be created and implemented.
Tools to Troubleshoot Network Connectivity Problems
- Ping: A tool for verifying whether a device on the network is reachable. It also reports the time taken by ICMP packets to traverse the network to their destination. Differences in round-trip time for different devices on different networks can point to connectivity issues, which this tool helps assess and highlight.
- Traceroute: A tool very similar to Ping; however, it reports round-trip information for each point between the source and destination IP addresses. The various hops along the path can be traced so that problems with network connectivity and latency can be localized (Whatismyipaddress, 2017).
- SNMP Monitoring Tools: These tools perform a variety of tasks: detecting networking performance problems, checking network status and baselining link utilization. Network bottlenecks are a primary cause of poor network connections, and these tools can identify them, along with hardware malfunctions affecting a network connection (Networkcomputing, 2017).
- NetFlow: SNMP monitoring tools can detect congestion and the regions where it occurs, but the exact cause of the congestion can be highlighted through NetFlow. NetFlow analyzer tools identify the likely reason behind the congestion, such as malware activity or a successful DDoS attack, and can also determine whether the traffic involved is critical.
- Protocol Analyzers: There are cases in which all the other tools fail and advanced troubleshooting is required to find the exact cause of network latency and connectivity issues. Protocol analyzers dig into every packet to identify the reason behind the latency and connectivity problems, performing deep packet inspection to understand whether the flow of packets is slow.
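The core of what a NetFlow analyzer does when hunting the cause of congestion can be sketched as a simple aggregation: group flow records by source address and rank by bytes sent to surface the "top talkers". The flow records below are made-up illustrative data, not real NetFlow v5/v9 records.

```python
from collections import defaultdict

# Made-up flow records: (source IP, destination IP, bytes transferred).
flows = [
    ("10.0.0.5", "10.0.0.9", 1_200_000),
    ("10.0.0.7", "10.0.0.9",   300_000),
    ("10.0.0.5", "10.0.0.2",   800_000),
    ("10.0.0.3", "10.0.0.9",    50_000),
]

def top_talkers(flows, n=2):
    """Return the n source addresses sending the most bytes,
    as (address, total_bytes) pairs in descending order."""
    totals = defaultdict(int)
    for src, _dst, nbytes in flows:
        totals[src] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_talkers(flows))
# [('10.0.0.5', 2000000), ('10.0.0.7', 300000)]
```

A real analyzer layers protocol, port and time-window breakdowns on top of this, but ranking aggregated byte counts per source is how it narrows congestion down to a culprit such as a DDoS source or a chatty host.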
Software as a Service (SaaS)
Software as a Service (SaaS) is a cloud delivery model in which applications are hosted and provided by a third-party provider. Its prime benefits include a low base price and the ease of updates and maintenance. The model uses a multi-tenant architecture, which rolls out the same functionality to all clients. Its drawbacks include the difficulty of localizing global applications and a relative lack of maturity (Sharma & Sood, 2011).
Platform as a Service (PaaS)
PaaS is a cloud delivery model in which the provider hosts the required hardware and software on its own infrastructure. Decreased costs, automatic updates and assured compatibility are some of its benefits. However, it also has drawbacks, including vendor lock-in and limited scalability. A certain degree of inflexibility comes with this model, making it unsuitable for applications with frequently changing specifications (Nasr & Ouf, 2012).
Infrastructure as a Service (IaaS)
IaaS is a cloud delivery model in which the hardware and infrastructure are provided and managed by a third-party provider. The organization does not need to manage releases, the infrastructure or the underlying data center, and incurs no infrastructure costs of its own. However, security issues may arise in an IaaS cloud, and the organization must maintain and manage its software systems and solutions on its own (Li, 2013).
References:
Boydell, B. (2011). Chapter 7: Capacity Planning and Management. Retrieved 17 October 2017, from https://researchportal.port.ac.uk/portal/files/176643/BOYDELL_2011_pub_Ch7_Capacity_planning_and_management.pdf
Brown, M. (2012). Windows Server 2012: An Overview of New Features. Tom’s IT Pro. Retrieved 17 October 2017, from https://www.tomsitpro.com/articles/windows_server_2012-hyper-v-storage_pool-iSCSI_target_server,1-464.html
Dummies. (2017). Components of a Server Computer – dummies. dummies. Retrieved 17 October 2017, from https://www.dummies.com/programming/networking/components-of-a-server-computer/
Hassell, J. (2012). 10 Key Windows Server 2012 Features for IT Pros. CIO. Retrieved 17 October 2017, from https://www.cio.com/article/2393205/servers/10-key-windows-server-2012-features-for-it-pros.html
James, J. (2012). Top 10 Windows Server 2012 Features. Petri. Retrieved 17 October 2017, from https://www.petri.com/top-10-windows-server-2012-features
Li, C. (2013). Efficient resource allocation for optimizing objectives of cloud users, IaaS provider and SaaS provider in cloud environment. The Journal of Supercomputing, 65(2), 866-885. https://dx.doi.org/10.1007/s11227-013-0869-z
Metron-athene. (2017). Virtual Capacity Planning & Management. Retrieved 17 October 2017, from https://www.metron-athene.com/virtual-capacity-management/index.html
Nasr, D., & Ouf, S. (2012). A Proposed Smart E-Learning System Using Cloud Computing Services: PaaS, IaaS and Web 3.0. International Journal of Emerging Technologies in Learning (iJET), 7(3). https://dx.doi.org/10.3991/ijet.v7i3.2066
Networkcomputing. (2017). Troubleshooting Network Latency: 6 Tools. Network Computing. Retrieved 17 October 2017, from https://www.networkcomputing.com/networking/troubleshooting-network-latency-6-tools/242797888
Raei, H. (2017). Capacity planning framework for mobile network operator cloud using analytical performance model. International Journal of Communication Systems, e3353. https://dx.doi.org/10.1002/dac.3353
Savill, J. (2013). New Features in Windows Server 2012 R2. Windowsitpro.com. Retrieved 17 October 2017, from https://windowsitpro.com/windows-server-2012/new-features-windows-server-2012-r2
Searchservervirtualization. (2017). Virtualization capacity planning strategy guide. SearchServerVirtualization. Retrieved 17 October 2017, from https://searchservervirtualization.techtarget.com/tutorial/Virtualization-capacity-planning-strategy-guide
Sharma, R., & Sood, M. (2011). Enhancing Cloud SAAS Development with Model Driven Architecture. International Journal on Cloud Computing: Services and Architecture, 1(3), 89-102. https://dx.doi.org/10.5121/ijccsa.2011.1307
Stackify. (2017). Most Useful Command Line Tools: 50 Cool Tools to Improve Your Workflow, Boost Productivity, and More. Stackify. Retrieved 17 October 2017, from https://stackify.com/top-command-line-tools/
Tang, L. (2017). Joint Pricing and Capacity Planning in the IaaS Cloud Market. IEEE Transactions on Cloud Computing, 5(1), 57-70. https://dx.doi.org/10.1109/tcc.2014.2372811
Whatismyipaddress. (2017). Traceroute Tool. WhatIsMyIPAddress.com. Retrieved 17 October 2017, from https://whatismyipaddress.com/traceroute-tool