We have been told that Google Chrome runs each tab in a separate process. Therefore a crash in one tab would not cause problems in the other tabs.
AFAIK, multiple processes are mostly used in programs without a GUI. I have never read about a technique for embedding several GUI processes into a single window.
How does Chrome do that?
I am asking this question because I am designing CCTV software which will use video decoding SDKs from multiple camera manufacturers, some of which are far from stable. So I would prefer to run these SDKs in different processes, which I believe is similar to what Chrome does.
Basically, they use another process that glues them all together into the GUI.
Google Chrome creates three different types of processes: browser, renderers, and plug-ins.
Browser: There's only one browser process, which manages the tabs, windows, and "chrome" of the browser. This process also handles all interactions with the disk, network, user input, and display, but it makes no attempt to parse or render any content from the web.
Renderers: The browser process creates many renderer processes, each responsible for rendering web pages. The renderer processes contain all the complex logic for handling HTML, JavaScript, CSS, images, and so on. Chrome achieves this using the open source WebKit rendering engine, which is also used by Apple's Safari web browser. Each renderer process is run in a sandbox, which means it has almost no direct access to the disk, network, or display. All interactions with web apps, including user input events and screen painting, must go through the browser process. This lets the browser process monitor the renderers for suspicious activity, killing them if it suspects an exploit has occurred.
Plug-ins: The browser process also creates one process for each type of plug-in that is in use, such as Flash, Quicktime, or Adobe Reader. These processes just contain the plug-ins themselves, along with some glue code to let them interact with the browser and renderers.
Source: Chromium Blog: Multi-process Architecture
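The isolation described above can be sketched in a few lines of Python's multiprocessing module. This is only an illustrative toy, not Chrome's actual code: run_tab is a made-up stand-in for a renderer, and the "crash" is simulated with a hard exit. The point is that each worker lives in its own address space, so one crashing leaves the others (and the coordinator) intact.

```python
# Toy sketch of one-process-per-tab isolation (run_tab is hypothetical,
# not Chrome's API). A crash in one child does not affect the others.
import multiprocessing as mp
import os

def run_tab(tab_id, crash):
    # Each "renderer" works in its own address space.
    if crash:
        os._exit(1)  # simulate a hard crash in this tab only
    # ...real rendering work would happen here...

def main():
    procs = []
    for tab_id, crash in [(1, False), (2, True), (3, False)]:
        p = mp.Process(target=run_tab, args=(tab_id, crash))
        p.start()
        procs.append((tab_id, p))
    for tab_id, p in procs:
        p.join()
        status = "crashed" if p.exitcode != 0 else "ok"
        print(f"tab {tab_id}: {status}")

if __name__ == "__main__":
    main()
```

A real browser-style supervisor would additionally watch exit codes and respawn (or display a "sad tab" page for) any renderer that died.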
In this context, the fundamental design is interesting.
Here are the relevant design documents, in particular the multi-process architecture section.
An architectural overview:
I just gave the first answer (the one explaining 'browser' vs. 'renderers' vs. 'plug-ins') an upvote; it seems the most complete and makes good sense to me.
The only thing I'll add is a few more comments about WHY Google's design is the way it is, and an opinion about why it's always been my first choice for an overall, everyday browser. (Though I realize that HOW, not WHY, was the question being asked.)
Designing so that individual components have their code in separate processes allows the OS to 'memory-protect' the processes from accidentally (or deliberately) modifying each other in ways that were not explicitly designed in.
The only parts of such a design that can both read and write shared data are the parts designed to NEED that access, and the design controls whether that access is read-only or read-write. Since those access controls are enforced by the hardware, they are firm guarantees that the access rules cannot be violated. Thus, plug-ins and extensions from different authors and companies, running in separate tabs/processes, cannot break each other.
Such a design minimises the chance of changing code or data that wasn't designed to be changed. This is good for security and makes for more reliable, less buggy code.
The mere fact that Google has such an intricate design is, to me, good testimony to the fact that Google has an excellent grasp of these concepts and has built a superior product. (That said, as web developers we still must test our code with multiple browsers. And browsers such as Firefox, having been around for a long time and having an excellent set of web-developer-related add-ons, still have advantages for some tasks.)
But, for everyday overall browser use, for almost all tasks, the Chrome browser has become my first choice. (Just my opinion, and of course, YMMV.)
Most of the work of rendering a web page is figuring out where exactly things go (i.e. where to place each picture, what color to render each piece of text). That work is done in a separate process. Once the separate process has figured where everything goes, it passes that information on to the main Chrome process which draws all of the elements on the screen.
It isn't clear exactly how your video SDK system is set up. But you could have one process that decompresses the video and another process that renders it to the display. Most likely, however, you are using OpenGL or DirectX, and those APIs may impose some limitations on how you split things up among different processes.
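For the CCTV case, one common pattern is to wrap each unstable vendor SDK in a worker process and talk to it over queues, so a crash in the SDK only kills the worker. A minimal Python sketch, where fake_decode is a hypothetical stand-in for a vendor SDK call:

```python
# Hedged sketch: isolate an unstable decoder in a child process.
# fake_decode is a made-up stand-in for a vendor SDK call.
import multiprocessing as mp

def fake_decode(packet):
    # A real SDK call here could crash; only this process would die.
    return packet.upper()

def decoder_worker(inbox, outbox):
    # Pull packets until the None sentinel arrives, then exit cleanly.
    for packet in iter(inbox.get, None):
        outbox.put(fake_decode(packet))

def decode_all(packets):
    inbox, outbox = mp.Queue(), mp.Queue()
    worker = mp.Process(target=decoder_worker, args=(inbox, outbox))
    worker.start()
    for p in packets:
        inbox.put(p)
    inbox.put(None)  # shutdown signal
    results = [outbox.get() for _ in packets]
    worker.join()
    # In real code, check worker.exitcode and respawn the worker
    # (and resubmit pending packets) if the SDK crashed it.
    return results
```

Passing raw decoded frames through a queue copies them; for high-resolution video you would move the pixel data through shared memory instead and only signal frame-ready events over the queue.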
I came across an article, which I think can answer this question: https://www.chromium.org/developers/design-documents/gpu-accelerated-compositing-in-chrome/
Basically, the underlying techniques are IPC and shared memory. There are two rendering models: GPU accelerated and software rendering.
GPU accelerated
The client (code running in the Renderer or within a NaCl module), instead of issuing calls directly to the system APIs, serializes them and puts them in a ring buffer (the command buffer) residing in memory shared between itself and the server process.
The server (GPU process running in a less restrictive sandbox that allows access to the platform's 3D APIs) picks up the serialized commands from shared memory, parses them and executes the appropriate graphics calls.
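The client/server command-buffer pattern can be illustrated with a toy in Python: a "client" serializes commands into a shared-memory segment and a "GPU server" process attaches to it, parses the commands, and executes them. Everything here (the command names, submit, gpu_server) is invented for illustration; Chrome's real command buffer is a ring buffer carrying serialized GL calls.

```python
# Toy command-buffer: client writes serialized commands to shared
# memory; a server process parses and "executes" them. All names
# are illustrative, not Chrome's.
import multiprocessing as mp
from multiprocessing import shared_memory

def gpu_server(shm_name, nbytes, results):
    # "GPU process": attach to the shared command buffer and run it.
    shm = shared_memory.SharedMemory(name=shm_name)
    for cmd in bytes(shm.buf[:nbytes]).decode().splitlines():
        results.put(("executed", cmd))  # stand-in for a real GL call
    shm.close()

def submit(commands):
    # "Client": serialize the commands into shared memory.
    payload = "\n".join(commands).encode()
    shm = shared_memory.SharedMemory(create=True, size=len(payload))
    shm.buf[:len(payload)] = payload
    results = mp.Queue()
    server = mp.Process(target=gpu_server,
                        args=(shm.name, len(payload), results))
    server.start()
    server.join()
    out = [results.get() for _ in commands]
    shm.close()
    shm.unlink()
    return out
```

The real design gains two things this toy hides: the client never blocks on the server (it just appends to the ring buffer), and the sandboxed renderer never touches the 3D APIs directly.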
Software rendering
This is the older model, in which the Renderer process passes a bitmap of the page's contents (via IPC and shared memory) over to the Browser process for display.
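The software path boils down to "renderer paints pixels, browser blits them". A hedged Python sketch of that handoff, using a pipe for brevity (real Chrome uses shared memory to avoid copying the bitmap); the dimensions and functions here are invented:

```python
# Toy software-rendering handoff: a renderer process produces a bitmap
# and hands it to the browser process. Names and sizes are illustrative.
import multiprocessing as mp

WIDTH, HEIGHT = 4, 2  # toy bitmap: one byte per pixel

def renderer(conn):
    # "Renderer": paint a solid-gray page into a bitmap.
    bitmap = bytes([128] * (WIDTH * HEIGHT))
    conn.send_bytes(bitmap)
    conn.close()

def browser_display():
    # "Browser": receive the finished bitmap and (pretend to) blit it.
    parent_end, child_end = mp.Pipe()
    p = mp.Process(target=renderer, args=(child_end,))
    p.start()
    bitmap = parent_end.recv_bytes()
    p.join()
    return bitmap  # the browser would now draw this to the screen
```

Because the renderer only ever produces pixels and never touches the display itself, it can run fully sandboxed, as the earlier answer describes.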
Window objects - the small, drawable rectangular areas used to implement widgets, not what the user sees as a window - can perfectly well be shared between processes, using shared memory or the X protocol. Check your toolkit's docs.