I need to write two separate replies, one for each discussion post. Each reply should be about 100 words. Please avoid plagiarism.
A substantive post will do at least two of the following:
- Ask an interesting, thoughtful question pertaining to the topic
- Answer a question (in detail) posted by another student or the instructor
- Provide extensive additional information on the topic
- Explain, define, or analyze the topic in detail
- Share an applicable personal experience
- Provide an outside source (for example, an article from the UC Library) that applies to the topic, along with additional information about the topic or the source (please cite properly in APA)
- Make an argument concerning the topic.
When it comes to rendering data for every user request, an organization needs to be precise without causing performance issues. So rather than simply answering people's queries about data, organizations are themselves seeking out the technologies that give users the best experience across all of their applications. Whatever technology they use in their product, they are indirectly relying on data, and when user traffic grows, the enterprise has to use big data for better outcomes. So I would like to discuss how each of these technologies utilizes big data and how it relates to global computing.
Sensors:
On hearing the word sensor, I see that we are already experiencing this in parking ticketing systems, where the sensors work as physical devices and a streaming platform such as Kafka carries the events they produce for scanning and printing the ticket. When a user arrives for a parking ticket, the system records the entry and allows the user to park the car. At the time of leaving, the barcode on the ticket is scanned and a proper fee estimate is produced. Once the payment is made, the user gets a notification, again because the Kafka stream interacts with the database. This has decreased the manual effort of individuals and indirectly helped the organization toward global computing.
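A minimal sketch of how such a ticketing pipeline might work, modeled here as an in-memory event stream rather than a real Kafka cluster (the topic of entry/exit events, the hourly rate, and the ticket IDs are illustrative assumptions, not details from any real system):

```python
import math
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical hourly rate; a real system would load this from configuration.
HOURLY_RATE = 2.50

@dataclass
class ParkingEvent:
    ticket_id: str
    kind: str        # "entry" or "exit"
    timestamp: datetime

def process_stream(events):
    """Consume entry/exit events (as a Kafka consumer would) and
    emit a fee notification when a ticket's exit event arrives."""
    entries = {}      # ticket_id -> entry time (stand-in for a state store)
    notifications = []
    for ev in events:
        if ev.kind == "entry":
            entries[ev.ticket_id] = ev.timestamp
        elif ev.kind == "exit" and ev.ticket_id in entries:
            hours = (ev.timestamp - entries.pop(ev.ticket_id)).total_seconds() / 3600
            fee = round(max(1, math.ceil(hours)) * HOURLY_RATE, 2)  # bill by the hour
            notifications.append((ev.ticket_id, fee))
    return notifications

t0 = datetime(2021, 1, 1, 9, 0)
stream = [
    ParkingEvent("T1", "entry", t0),
    ParkingEvent("T1", "exit", t0 + timedelta(hours=2, minutes=15)),
]
print(process_stream(stream))  # 2h15m is billed as 3 hours: [('T1', 7.5)]
```

In a real deployment, the entry and exit scanners would publish these events to a Kafka topic and the consumer's state store would replace the in-memory dictionary.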
Computer Networks:
When accessing or storing a large volume of data, the network capacity should be such that no server fails a user's request, and all error scenarios must be handled as well, so that users always get correct data without any issue. To achieve this, organizations use advanced processors that can handle the load as requests are transferred from one network to another through some medium (Tsai, 2015). This keeps users from getting frustrated and also helps the organization make better use of big data.
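One common way to keep a server failure from failing the user's request is to retry against replicas. A minimal sketch of that idea, with made-up server names and a simulated fetch function standing in for a real network call:

```python
def fetch_with_failover(servers, fetch, max_attempts=3):
    """Try each replica in turn, retrying transient failures, so the
    user never sees an error while any replica can serve the data."""
    last_error = None
    for _ in range(max_attempts):
        for server in servers:
            try:
                return fetch(server)
            except ConnectionError as exc:   # transient network failure
                last_error = exc
    raise last_error

# Simulated replicas: the first always fails, the second serves the data.
def fake_fetch(server):
    if server == "node-a":
        raise ConnectionError("node-a unreachable")
    return {"rows": 42}

print(fetch_with_failover(["node-a", "node-b"], fake_fetch))  # {'rows': 42}
```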
Data Storage:
Storing data securely and keeping a proper backup in the database was always an issue for organizations, but since big data came into play it is different. Most of the unused data is now archived, and storage costs have improved considerably. Big data has helped organizations store huge amounts of data that relational databases could not handle, and it has also brought users ease of access.
Cluster Computer Systems:
Big data has helped systems communicate properly from one system to another when they are connected through networks. Previously there used to be some lag while transferring data from one machine to another. With the help of big data technologies, speed and accuracy have improved a lot compared to earlier.
Cloud Computing and Algorithms:
The role of big data here is that it has benefited both users and organizations through ease of access to their data. Previously there were a lot of hiccups in analyzing data, and it was tough for organizations to know how to improve their business. With this technology, organizations can find solutions quickly through proper analysis of their logs. The data sits on cloud servers where users can access it at any time, and because big data accepts a variety of data, it helps the business separate the required information from the unnecessary (Assunção, 2015). On top of this, algorithms are very helpful for organizing the data without causing performance issues.
Assunção, C. (2015). Big data computing and clouds: Trends and future directions. Journal of Parallel and Distributed Computing, 79-80, 315. https://doi.org/10.1016/j.jpdc.2014.08.003
Tsai, L. (2015). Big data analytics: a survey. Journal of Big Data, 2(1), 132. https://doi.org/10.1186/s40537-015-0030-3
The growing importance of big data processing stems from advances in several areas:
Sensors – Digital information is produced from a wide range of sources, including digital imagers (telescopes, video cameras, MRI machines), chemical and biological sensors (microarrays, environmental monitors), and even the millions of people and organizations that create web pages (Brock & Khan, 2017).
Computer Networks: Data from a wide range of sources can be gathered into huge data sets by means of localized sensor networks, as well as the Internet.
Data Storage: Advances in magnetic disk technology have significantly reduced the cost of storing data. For example, a one-terabyte disk drive, holding a trillion bytes of data, costs about $100. As a point of reference, it is estimated that if all of the text in all of the books in the Library of Congress were converted to digital form, it would occupy about 20 terabytes (Brock & Khan, 2017).
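At those figures, the comparison works out to a strikingly small number; a quick sanity check of the arithmetic:

```python
cost_per_tb_usd = 100        # disk cost quoted above
library_of_congress_tb = 20  # estimated size of all its books, digitized

# Total cost to hold the entire collection on commodity disks.
total_cost = cost_per_tb_usd * library_of_congress_tb
print(f"${total_cost}")  # $2000
```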
Cluster Computing Systems: A new type of computer system, made up of thousands of "nodes," each with several processors and disks and connected by high-speed local-area networks, has become the preferred hardware configuration for large-scale data-intensive computing. These clusters provide both the storage capacity for huge data sets and the computing power to organize the data, analyze it, and respond to queries from remote users. Unlike traditional high-performance computing (e.g., supercomputers), where the emphasis is on maximizing the raw computing power of a single machine, cluster systems aim to maximize the reliability and efficiency with which they can manage and analyze exceptionally large data sets. The "trick" lies in the software: cluster systems are built from huge numbers of inexpensive commodity hardware components, with scalability, reliability, and programmability achieved by new software paradigms (Brock & Khan, 2017).
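The best-known of those software paradigms is MapReduce. A toy sketch of its three phases in plain Python, with the "nodes" simulated in a single process (the documents and phase names are illustrative, not tied to any particular framework):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Each node maps its local document to (word, 1) pairs.
    return [(word, 1) for word in document.split()]

def shuffle(mapped):
    # The framework groups intermediate pairs by key across the cluster.
    groups = defaultdict(list)
    for word, count in mapped:
        groups[word].append(count)
    return groups

def reduce_phase(groups):
    # Each reducer sums the counts for the keys it owns.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big clusters", "data data everywhere"]
mapped = list(chain.from_iterable(map_phase(d) for d in docs))
print(reduce_phase(shuffle(mapped)))
# {'big': 2, 'data': 3, 'clusters': 1, 'everywhere': 1}
```

On a real cluster, the map and reduce calls run on different nodes and the shuffle moves data over the network; the programmer writes only the two per-record functions.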
Cloud Computing Facilities: The rise of huge data centers and cluster computers has created a new business model, where businesses and individuals can rent storage and computing capacity rather than making the huge capital investments needed to build and provision their own computing installations. For example, Amazon Web Services (AWS) provides storage billed per gigabyte per month and computing cycles billed per CPU hour. Just as only a few organizations operate their own power plants today, we can anticipate a time when data storage and computing become widely available public utilities.
Data Analysis Algorithms – Large volumes of data require automated or semi-automated analysis: techniques for detecting patterns, recognizing anomalies, and extracting knowledge. Again, the "trick" lies in the software: new forms of computation, combining statistical analysis, optimization, and artificial intelligence, can build statistical models from large collections of data and infer how the system should respond to new data. For example, Netflix uses machine learning in its recommendation system, predicting a customer's interests by comparing their movie-viewing history against a statistical model built from the aggregate viewing habits of millions of other customers (Rodriguez & Da Cunha, 2018).
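One simple version of that idea is neighborhood-based collaborative filtering: compare a user's viewing vector to other users' vectors and recommend what the nearest neighbor watched. A toy sketch with invented users and viewing histories (this is an illustration of the general technique, not Netflix's actual algorithm):

```python
import math

def cosine(u, v):
    """Cosine similarity between two viewing vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical viewing histories: 1 = watched the title, 0 = did not.
histories = {
    "alice": [1, 1, 0, 0],
    "bob":   [1, 1, 1, 0],
    "carol": [0, 0, 1, 1],
}

def recommend(user, histories):
    """Recommend titles the most similar other user watched but this user has not."""
    mine = histories[user]
    best = max((u for u in histories if u != user),
               key=lambda u: cosine(mine, histories[u]))
    return [i for i, (a, b) in enumerate(zip(mine, histories[best])) if b and not a]

print(recommend("alice", histories))  # alice's nearest neighbor is bob -> [2]
```

Production systems use far richer statistical models, but the core step, predicting one user's interests from the aggregate behavior of similar users, is the same.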
Brock, V., & Khan, H. U. (2017). Big data analytics: Does organizational factor matters impact technology acceptance? Journal of Big Data, 4(1), 1-28. doi:http://dx.doi.org
Rodriguez, L., & Da Cunha, C. (2018). Impacts of big data analytics and absorptive capacity on sustainable supply chain innovation: A conceptual framework. LogForum, 14(2). doi:http://dx.doi.org