
 2 Objects and concurrency

There are many ways to characterize objects, concurrency, and the relationship between them. This section discusses several different perspectives (definitions, systems, styles, and models) that together help build a conceptual framework for concurrent object-oriented programming.


1 Concurrency

Like most computing terms, "concurrency" is difficult to pin down. Informally, a concurrent program is one that does more than one thing at a time. For example, a web browser may simultaneously be performing an HTTP GET request to fetch an HTML page, playing an audio clip, displaying the number of bytes received of an image, and carrying on a dialog with the user. However, this simultaneity is sometimes an illusion. On some computer systems these different activities are indeed performed by different CPUs, but on other systems they are all performed by a single time-shared CPU that switches among the activities quickly enough that, to a human observer, they appear to run at the same time, or at least to be interleaved nondeterministically.

A more precise (if not very interesting) definition of concurrent programming can be phrased operationally: the Java virtual machine and the underlying operating system (OS) map apparent simultaneity onto physical parallelism (across multiple CPUs), or onto the lack of it, by letting independent activities run in parallel when that is possible and desirable, and otherwise by time-sharing. Concurrent programming in the Java programming language consists of using Java language constructs for this purpose, rather than the system-level constructs used to create new operating system processes.
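To make the last point concrete, here is a minimal sketch (the class name and printed messages are placeholders, not taken from the text) of starting a new activity with a Java language-level construct, a Thread, rather than a new OS process. Whether the two activities actually run on separate CPUs or are time-sliced on one is decided by the JVM and the OS.

    // Minimal sketch: a new concurrent activity expressed as a Thread,
    // a Java language-level construct, not a new operating system process.
    public class HelloConcurrency {
        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                // May run in parallel with main (on another CPU) or be
                // time-sliced with it; the JVM and OS decide the mapping.
                System.out.println("worker: running in " + Thread.currentThread().getName());
            });
            worker.start();               // begin the new activity
            System.out.println("main: continues independently");
            worker.join();                // wait for the activity to finish
        }
    }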


You can best appreciate concurrency, and the reasons for using it, by looking at the nature of a few common kinds of concurrent applications:

Web services. Most socket-based web services (such as HTTP daemons, servlet engines, and application servers) are multithreaded. The main motivation for supporting multiple simultaneous connections is to ensure that new incoming connections do not have to wait for other connections to complete. This generally minimizes service latency and improves availability. (A minimal thread-per-connection sketch follows this list.)

Computation-intensive tasks. Such work can often be divided across more than one CPU when it is available; the goal here is to maximize performance by exploiting parallelism.

I/O processing. Even on a nominally sequential computer, devices such as disk drives and network links work independently of the CPU. Concurrent programs can use the time that would otherwise be wasted waiting for slow I/O, and so make better use of a computer's resources.

Simulation. Concurrent programs can model physical objects with independent behaviors that are hard to capture in purely sequential programs.

GUI-based applications. Although most user interfaces are intentionally single-threaded, they often establish or interact with multithreaded services. Concurrency keeps user controls responsive even during lengthy operations.

Component-based software. Large-granularity software components (for example, those provided by design tools such as layout editors) may create internal threads to help with bookkeeping, provide multimedia support, achieve greater autonomy, or improve performance.

Mobile code. Frameworks such as the java.applet package execute downloaded code in separate threads as part of a set of policies that help isolate, monitor, and control the effects of unknown code.

Embedded systems. Multiple components each continuously react to external input from sensors or other devices and produce external output in a timely manner. As defined in The Java™ Language Specification, the Java platform does not directly support hard real-time control, in which the correctness of the system depends on actions being completed within certain time bounds. A particular run-time system may provide the stronger guarantees required by some safety-critical real-time systems, but all JVM implementations support soft real-time control, in which timeliness and performance are treated as quality-of-service issues rather than correctness issues (see 1.3). This reflects portability goals that allow the JVM to be implemented on modern general-purpose hardware and system software.
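The following sketch illustrates the web services item above: a thread-per-connection server in which each accepted connection is handled in its own thread, so new connections need not wait for earlier ones to finish. The port number (8080) and the fixed one-line reply are placeholder assumptions standing in for real service work.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Sketch of a thread-per-connection socket service: accept a connection,
    // then hand it to a new thread so the accept loop stays free for others.
    public class ThreadPerConnectionServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket connection = server.accept();        // blocks for next client
                    new Thread(() -> handle(connection)).start();
                }
            }
        }

        private static void handle(Socket connection) {
            try (Socket c = connection;
                 PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                out.println("hello");                           // stand-in for real service work
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }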




2 Concurrent constructions

Threads are only one of several constructs that can be used to execute code concurrently. The idea of creating a new activity can be mapped to any of several abstractions along a granularity continuum that reflects a trade-off between autonomy and overhead. Thread-based designs do not always provide the best solution to a given concurrency problem; choosing one of the alternatives discussed below can provide more or less security, protection, fault tolerance, and administrative control, with more or less associated overhead. Differences among these options (and the programming constructs associated with them) affect design strategies more than all the details surrounding them.

Computer systems. If you have a large supply of computer systems, you can assign each logical unit of execution to a different computer, or even to a group of machines administered as a single unit under a common operating system. This provides complete autonomy and independence: each system can be administered and controlled separately from all of the others. However, the costs of creating, discovering, querying, and routing messages among these systems can be high. This mapping also precludes sharing local resources, and it raises issues of naming, security, fault tolerance, recovery, and availability that are relatively hard compared with those in concurrent programs. It is therefore usually applied only to those aspects of a system that intrinsically require a distributed solution.

Processes. A process is an operating-system abstraction that allows one computer system to support many units of execution. Each process typically represents a separate running program; for example, an executing JVM. Like a "computer system", a process is a logical rather than a physical abstraction; for example, the bindings from processes to CPUs may change dynamically. The operating system guarantees some degree of independence, non-interference, and security among concurrently executing processes. Processes are generally not allowed to access one another's storage locations (although there are usually some exceptions) and must instead communicate through inter-process communication facilities such as pipes. Scheduling almost always involves pre-emptive time-slicing: periodically suspending a process so that other processes can run. Compared with per-machine solutions, the overhead of creating, managing, and communicating among processes is much lower. However, because processes share the underlying computing resources (CPUs, memory, I/O channels, and so on), they are less autonomous; for example, a machine crash caused by one process kills all processes.
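The contrast between the constructs just described can be sketched as follows. This is only an illustration under stated assumptions: it launches a second JVM with ProcessBuilder using the command "java -version" (assumed to be on the PATH) and reads its output through a pipe, while a thread in the same process updates a shared object directly.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Sketch: a separate OS process has its own address space, so we talk to
    // it through a pipe; a thread runs inside this process and shares objects.
    public class ProcessVersusThread {
        public static void main(String[] args) throws Exception {
            // A separate process: here, another JVM started as its own program.
            Process jvm = new ProcessBuilder("java", "-version")
                    .redirectErrorStream(true)
                    .start();
            try (BufferedReader pipe = new BufferedReader(
                    new InputStreamReader(jvm.getInputStream()))) {
                String line;
                while ((line = pipe.readLine()) != null) {
                    System.out.println("from process: " + line);   // read via a pipe
                }
            }
            jvm.waitFor();

            // A thread: shares memory with the rest of this program.
            StringBuilder shared = new StringBuilder();
            Thread t = new Thread(() -> shared.append("updated by thread"));
            t.start();
            t.join();
            System.out.println("from thread: " + shared);
        }
    }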

