Refinery Transforms

A refinery transforms crude oil into many different carbon-based and hydro-processed products. The crude oil is heated, and each desired product is extracted at a given temperature. A great deal of energy is needed to heat the crude; in fact, energy is required as much for heating as for cooling, at different stages of the process. The refining process itself is a well-mastered technology, but the same problems persist in almost all refineries: fouling and corrosion in the various heat exchangers and thermal installations.

MERUS® is able to deal with both kinds of problems: fouling in water systems and fouling related to hydrocarbons.

Where there is a sufficient supply of good-quality water, problems in the cooling systems are minimal. But in some parts of the world where water is scarce, the cooling water causes a great deal of scale, fouling and corrosion in the heat exchangers and piping. As most refineries are located by the sea, seawater is used for cooling, and it is the source of a large number of additional problems. While very few refineries use seawater directly to cool their process, most use a battery of heat exchangers in which seawater cools the cooling water.

We have identified the following major problems, where MERUS® is able to provide a solution:

Scale formation in pipes and machinery, including heat exchangers, oil coolers, lubricant coolers and compressor coolers.

Corrosion of pipes and of heat exchangers integrated into machines. If the corrosion is caused only by the cooling water, MERUS® is normally able to control it. If the corrosion is caused by impurities entering the water from the process side, it is much more difficult to control and cannot in all cases be completely eliminated.

Biofouling in the cooling system, which we find particularly in warm regions of the world and which is causing more and more problems.

Known problems when using seawater include marine growth such as barnacles and shellfish, which block the flow of water.

MERUS® has solved the following issues in water applications:

MERUS® deals with individual heat exchangers that are considered critical to the process, have a strong tendency to foul, and therefore require a lot of regular cleaning.

MERUS® takes care of the entire cooling water system, from cooling towers to piping systems and heat exchangers.

MERUS® decreases biofouling in the water system. This is only possible if the cooling water system is treated as a whole with MERUS® rings.

MERUS® treats wastewater systems, in particular to limit the formation of struvites.

MERUS® treats fire extinguishing systems, minimizing corrosion.

In Merus® water applications where the cooling system is treated, there has been a significant reduction in water-treatment chemicals; in some cases, no polluting chemicals are needed at all. In addition, we have found that treating the cooling water system reduces pressure drop, which also makes it possible to save water.

Comparing chemical treatment with the use of Merus® rings, we have noticed much less fouling in heat exchangers with Merus®, resulting in consistent cooling performance and less need for maintenance.

Merus® improves the separation of hydrocarbons from water:

Merus® can treat tank farms, where crude is stored before pre-treatment. In some cases there is still a significant amount of residual water in the crude. This water causes problems later in the process, so it should be reduced as much as possible. The Merus® ring has shown its ability to improve the separation of crude and water, and this should be possible in the tanks of a tank farm as well. In addition, the hardness of this water causes corrosion in the tanks, which can be reduced by Merus®.

Having won over several customers, Merus® is increasingly called upon to maintain their cooling systems.

We can offer you the opportunity to speak with some of our customers, with whom we work under contracts with Saudi Aramco, KNPC, PetroSA, Q8 and Samref.

Refining Edible Oils Technology

Refining edible oils is a step-by-step process. Refining removes phospholipids, pigments, off-flavors, free fatty acids and other impurities. The process in any oil refining plant includes degumming, neutralization, bleaching, deodorization and winterization. Chemical refining is carried out to remove the free fatty acids contained in the crude oil extracted from the seeds; these are neutralized with caustic soda. The resulting sodium soaps can then be removed by decanting the contents of the tank or by using centrifugal separators. Oils whose acids have been neutralized are then bleached and deodorized.

Rather than chemical refining, another method, mechanical (physical) refining, can be used to refine edible oil. With this method, free fatty acids are removed by a distillation process in a single deodorization step. To achieve effective results, the crude oil must first be carefully degummed. However, this process does not suit certain oils, such as cottonseed oil. All refining methods are carried out using various equipment and machines, and together they can refine almost all types of oil extracted from oilseeds, such as sunflower, flax, sesame and mustard, and from legumes such as peanuts.

Mechanical and chemical refining processes are defined by the technology used. Mechanical refining begins with a degumming process that frees the oil of its gums, followed by a special method of separating out free fatty acids during steam deodorization. Chemical refining, on the other hand, uses chemicals (acid-base neutralization) to free the oil of its free fatty acids; the gums and waxes are then separated out by centrifuges.

  • Refining technology
  • Peculiarities of physical refining:
      • High degree of refinement; less oil loss
      • No wastewater discharge
      • More free fatty acids distilled off
      • Particularly suitable for highly acidic oils and those with low gum content
  • Special features of chemical refining:
      • Excellent adaptability and lower demands on oil quality
      • Refined oil is consistent and stable
      • Less bleaching clay required than in physical refining
  • Process of an oil refining plant

With 10 years of experience in manufacturing and exporting complete oil mill plants as well as a variety of oil mill machinery, KMEC is an expert in edible oil refining. In the oil refining plant, there are several steps to be followed.

Take-off section


The Current Refining Topology Of Corporate Networks

There are three places where most businesses tend to deploy and manage their own applications and services:

On premises, in data centers housing multiple server racks, equipped with the resources to power and cool them, and with dedicated connectivity.

Colocation facilities, where customer equipment is housed in a fully managed building in which power, cooling and connectivity are provided as a service.

Cloud service providers, where customer infrastructure can be virtualized to some extent, and services and applications are delivered on a usage basis, allowing operations to be counted as operational expenses rather than capital expenses (opex vs. capex).

Edge computing architects would add a fourth category to this list: one that takes advantage of the portability of containerized facilities with smaller, more modular servers to reduce the distance between the point where data is processed and the point where network functionality is consumed. If their plans come to fruition, they would accomplish the following:

Potential benefits of edge computing

Minimal latency. The problem with cloud computing services today is that they are slow, especially for artificial intelligence workloads. This makes the cloud unusable for applications such as real-time forecasting of securities markets or piloting autonomous vehicles.

Processors located in small data centers closer to where they are used could open up new markets for IT services that cloud providers have not been able to address until now. In an IoT scenario, where clusters of stand-alone data-collection devices are widely distributed, having processors closer to those clusters could dramatically improve processing time, making real-time analysis possible at a much more granular level.
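The distance argument can be made concrete with a bit of arithmetic. The sketch below uses hypothetical distances and counts only propagation delay in optical fiber (roughly 200 km per millisecond); real round trips add routing, queuing, and processing time on top:

```python
# Light travels at roughly 200,000 km/s in optical fiber
# (speed of light in vacuum divided by the fiber's refractive index of ~1.5).
FIBER_KM_PER_MS = 200.0  # ~200 km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay, ignoring routing and queuing."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Hypothetical distances: a distant cloud region vs. a nearby edge µDC.
print(round_trip_ms(1000))  # 10.0 ms spent in transit before any processing
print(round_trip_ms(10))    # 0.1 ms when the servers sit near the users
```

Even in this best case, moving the processing from 1,000 km away to 10 km away removes two orders of magnitude of transit delay, which is the whole premise of the latency argument above.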

Simplified maintenance. For a business that has no difficulty sending a fleet of maintenance vehicles into the field, micro data centers (µDCs) are designed for maximum accessibility, modularity, and a reasonable degree of portability. They are compact enclosures, some small enough to fit in the back of a pickup truck, that can house just enough servers for critical functions and can be deployed closer to their users.

Conceivably, for a building that currently houses, powers, and cools its data center in its basement, replacing all of that with three or four µDCs somewhere in the parking lot might actually be an improvement.

Cheaper cooling. For large data centers, the monthly cost of electricity used for cooling can easily exceed the cost of electricity used for processing. The ratio of total facility power to the power used for computing is called power usage effectiveness (PUE); it is the benchmark measure of data center efficiency (although in recent years surveys have shown that some IT operators do not know what the ratio actually means).
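Since the text notes that some operators misread this metric, it may help to spell it out: PUE is total facility power divided by the power that actually reaches the IT equipment. The figures below are hypothetical, for illustration only:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt goes to computing; real facilities
    score higher because of cooling, power conversion, and lighting."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,000 kW total draw, 625 kW reaching the servers,
# so 375 kW goes to cooling and other overhead.
print(pue(1000, 625))  # 1.6
```

The lower the PUE, the smaller the share of the electricity bill spent on anything other than computing, which is why cheaper cooling shows up directly in this number.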

Theoretically, it may be less expensive for a business to cool and condition multiple small data center spaces than a single large one. Additionally, due to the particular way some utility companies handle billing, the cost per kilowatt may drop for the same server racks hosted in multiple small facilities rather than a large one.

A white paper published in 2017 by Schneider Electric [PDF] assessed all the costs associated with building traditional data centers and micro data centers. While a company might incur just under $7 million in capital expenses to build a traditional 1 MW facility, it would spend just over $4 million to establish 200 micro installations of 5 kW each.
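The capacity math behind that comparison checks out: 200 installations of 5 kW each add up to the same 1 MW. The per-kW breakdown below is our own back-of-the-envelope arithmetic on the rounded dollar figures, not numbers taken from the white paper:

```python
# 200 micro data centers at 5 kW each match the traditional 1 MW facility.
micro_units, kw_per_unit = 200, 5
total_kw = micro_units * kw_per_unit
assert total_kw == 1000  # 1 MW

traditional_capex = 7_000_000  # "just under $7 million" (traditional build)
micro_capex = 4_000_000        # "just over $4 million" (200 x 5 kW sites)

print(traditional_capex / total_kw)  # 7000.0 -> about $7,000 per kW
print(micro_capex / total_kw)        # 4000.0 -> about $4,000 per kW
```

On those rounded figures, the distributed approach comes out at roughly $4,000 per kW of capacity versus roughly $7,000 per kW for the single large facility.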

An ecological device? There has always been a certain ecological appeal to the idea of distributing computing power to customers over a larger geographic area, as opposed to centralizing that power in gigantic facilities connected by fiber-optic links.

The initial marketing of edge computing was based on the common sense impression that small facilities consume less energy, even collectively. But it is really difficult to know if this is scientifically proven. A 2018 study by researchers at Kosice Technical University, Slovakia [ PDF ], using simulated edge computing deployments in an IoT scenario, concluded that the energy efficiency of edge computing depends almost entirely on the accuracy and efficiency of the calculations performed there. The overheads generated by inefficient calculations, they found, would in fact be magnified.

While this all sounds like too complex a system to be feasible, it should be borne in mind that in its current form, the public cloud computing model might not be viable in the long term. Under this model, subscribers would continue to run applications, data streams and content streams through pipes connected to mega data centers whose service areas span entire states, provinces and countries.

What Does “Refining Edge Computing” Mean?

In any telecommunications network, the edge is the point of the facilities and services farthest from the center and closest to the customers. In the context of edge computing, the edge is where servers can deliver functionality to customers as quickly as possible.

On the internet, data is typically collected from multiple servers and conveyed to a data center for processing. CDNs speed up this process by acting as “pump stations” closer to the users. The typical lifecycle of a network service involves this “round trip”: data is extracted, shipped, refined, and reshipped. And, as with any process involving logistics, transportation takes time.

This simplified diagram from NTT shows CDN servers inserting themselves between the data access point and the users. From the perspective of data or content producers, as opposed to delivery players, CDNs sit near the end of the supply chain – the penultimate step, in fact, before the data reaches the end user.

Over the past decade, major CDN vendors have started to introduce IT services that reside at the point of delivery. Imagine that a gas station could be its own refinery, and you get the idea. The value proposition of this service depends on the perception that CDNs are not at the center, but at the periphery of the system. It allows certain data to bypass the need for long distance transport.

The trend towards decentralization

If CDNs have not yet proven the effectiveness of edge computing, they have at least demonstrated its business value: companies pay to have certain data processed before it reaches the center, or “core”, of the network.

“We’ve had a pretty long period of centralization,” said Matt Baker, senior vice president of Dell Technologies. “As we seek to deliver more and more real-time digital experiences through our digital transformation initiatives, the ability to maintain this highly centralized approach to IT is starting to fracture.”

Edge computing is touted as one of the lucrative new markets made possible by 5G technology. For the transition from 4G to 5G to be economically feasible for many telecom companies, the new generation must open up new revenue channels. 5G requires a new and vast network of (ironically) wired fiber-optic connections to provide transmitters and base stations instant access to digital data (the backhaul ).

Therefore, a new category of IT service providers has the opportunity to deploy multiple µDCs adjacent to radio access networks (RANs), perhaps adjacent to or even sharing a building with telecom operators’ base stations. These data centers could offer cloud computing services to selected customers at competitive rates, with features comparable to those of large-scale cloud providers such as Amazon, Microsoft Azure, and Google Cloud Platform.

Ideally, perhaps after a decade of evolution, edge computing would provide fast service to customers located near their base stations. We would need huge fiber optic pipes to provide the necessary backhaul, but the income from edge computing services could in theory finance their construction, making them profitable.

Service level objectives

Ultimately, the success or failure of edge computing data centers will be determined by their ability to meet service level objectives (SLOs): the expectations of paying customers, as codified in their service contracts. If an edge deployment isn’t significantly faster than a large-scale deployment, then edge computing is dead.

“What are we interested in? It’s application response time,” said Tom Gillis, senior vice president of VMware. “If we can characterize how the application responds, and look at the individual components that work to provide that response, we can actually begin to create a self-healing infrastructure.”

Reducing latency and improving processing speed should work in favor of SLOs. Some also point out how the wide distribution of resources over an area contributes to service redundancy and even business continuity – addressing disruptions which, at least until the pandemic, were seen as one- or two-day events followed by recovery periods.

But there will be balancing factors, including upkeep and maintenance. A typical Tier 2 data center can be maintained, in emergency circumstances (such as a pandemic), by as few as two people on-site, with support staff off-site. A µDC, by contrast, is designed to operate without constant maintenance by personnel. Its built-in monitoring functions constantly send telemetry to a central hub, which could theoretically be in the public cloud. As long as a µDC meets its SLOs, there is no need to attend to it in person.

This is where the viability of the edge computing model still needs to be thoroughly tested. As part of a typical data center vendor contract, an SLO is often measured by how quickly the vendor’s staff can resolve an outstanding issue. In general, resolution times can remain low when personnel do not have to travel by road. If an edge computing deployment model is to be competitive with a colocation deployment model, its automated resolution capabilities should be very good.

The Potential Pitfalls Of Refining Edge Computing

Nonetheless, a computing world completely rebuilt on edge computing is about as illusory, and remote, as a world without oil. In the short term, the edge computing model faces significant obstacles, many of which will not be easy to overcome:

Availability of the necessary electrical power. Servers capable of providing remote, cloud-based services to enterprises, regardless of location, require large processors and in-memory data storage to enable multi-tenancy. They will also probably need three-phase electrical power, which is difficult, if not impossible, to obtain in rural areas.

Moreover, telecom operators’ base stations have never needed this level of power until now. The only reason to modernize the power supply would be to make edge computing viable.

Switch to network slicing. For the switch to 5G to be economically feasible, telecom operators must earn additional income from edge computing. The idea of linking the evolution of edge computing to 5G arose from the notion that business and operational functions could coexist on the same servers – a concept introduced by the Central Office Re-architected as a Datacenter (CORD) initiative (initially “Re-imagined”), one form of which is now seen as a key enabler of 5G.

The problem is that it may not even be legal for telecommunications network operations to coexist with customer-facing functions on the same systems – the answer depends on legislators’ ability to grasp a new definition of “systems”. Until that day comes (if it ever does), 3GPP (the industry organization that governs 5G standards) has adopted a concept called “network slicing”: a way of carving telecom network servers into virtual servers at a very low level, with much greater separation than in a typical virtualization environment such as VMware’s.

It is conceivable that a customer-facing network slice could be deployed for edge computing on telecom networks. However, some large companies would prefer to manage their own network slicing, even if that means deploying it on their own premises – moving the edge in-house – rather than investing in a new system whose value proposition relies heavily on hope.

Telecom operators defending themselves against new entrants (who are also their customers). If the 5G radio access network (RAN) and the fiber-optic cables connected to it are to be used for commercial services, a gateway must be set up to separate private customers’ traffic from that of the telecom operators. The architecture for such a gateway already exists [PDF] and has been formally adopted by 3GPP. It is called local breakout, and it is also part of the ETSI standards body’s official declaration on multi-access edge computing (MEC).

Technically, then, this problem has been solved. The trouble is that some telecom operators may have an economic interest in preventing this system from being established, preferring to host the data in their own data centers.

The current topology of the internet has three tiers: Tier 1 service providers peer only with one another, while Tier 2 providers are generally the ones in contact with customers. The third tier allows smaller internet service providers to operate at a more local level. Edge computing at global scale could become the catalyst for public-cloud-style services offered by ISPs at that local level. But this presupposes that the telecom operators who manage Tier 2 are prepared to let incoming network traffic be distributed across a third tier, opening up competition in a market they could very easily claim for themselves.

If the location of data matters to a business, the hyperscale, centralized, power-hungry nature of cloud data centers may end up working against it, as smaller, more agile, and more cost-effective operating models emerge in more widely distributed places.

Indeed, the data rates of 5G networks, together with customers’ increasing use of data generated by devices and sensors, will require mobile base stations to become mini data centers.