
Mining and IT-OT convergence
Relevance of the various ISA standards to productivity, analytics, security, and digital technology adoption in mining

By Bas Mutsaers

It seems like a long time ago that we started talking about information technology/operational technology (IT/OT) convergence, wireless technologies, and a mature approach to industrial safety. In the mining industry, however, we could still benefit from stronger adoption of ISA standards to help accelerate the productivity journey.

Is it the right time to adopt new ways?

ISA has recently formed a Mining and Metals Division. It will be interesting to see what this will mean for the communication, standards, and viewpoints that are generated around mining. In recent months, some parts of the industry have started to turn the corner. Many resource shares are up, and most commodity prices are up strongly due to continued demand. This is evidenced in metallurgical coal share prices. It is also significant that iron ore and precious metals prices have recovered from their recent lows.

Through the downturn, capital reduction and the low availability of excess cash have forced the productivity agenda at a level not seen before in mining. There has been a tremendous focus on the productivity of current assets. As a result, the cost per ton for most of the volume in the market has been strongly reduced.



Figure 1. Volume increases by the majors have pushed higher-cost producers out of the market.
(wmt CFR = wet metric ton, cost and freight)


 

Commodity prices have now returned to their long-term trend because larger companies have focused on increasing the volume of their current assets. This is depicted by the yellow areas in figure 1. This comes at the cost of the smaller, less profitable assets that are naturally pushed to the right. As a result, given their position on the curve, these assets are not attractive in a saturated market, even at the right cost point. The companies on the upper side of the cost curve (in red) typically do not have the same leverage to adopt a productivity agenda. They are typically smaller (or older) assets that are not as flexible or as efficient as large tier 1 ore bodies. These companies often have less access to power, water, or transport, or face higher complexity (ore body or processing needs). Several of these assets may also be nearing the end of their life, and it simply does not make sense to sink any more money into them without increased market demand or a lower cost base.

Capital reduction

We have seen plenty of examples of entities that are now priced out of the market and in temporary "care and maintenance" or up for sale. Equipment inventories are extremely low because of significant capital reduction programs forced by the recent commodity prices. Many sites are operating both their mobile fleet and their comminution plant with older equipment and have renewal rates below their depreciation value. That cannot last, and investment will have to return when demand increases.

Temporary underinvestment in maintenance-related drivers like safety and downtime needs to be monitored and controlled much more closely to prevent surprises. These factors are ever more important to lean and efficient production and the license to operate. Also, autonomous operation and maintenance, even from within the pit, is on the road map or already in execution at most major companies, a trend that cannot be stopped. Investors have growing expectations for a high level of automation to keep people away from the most dangerous tasks, which helps reduce the risks of assets in their portfolio.

The need for good reporting and intelligence is greater than ever to support an efficient operation, targeted not only at "maximum production," but also at "maximum efficiency at variable throughput levels." This puts organizations in a position to serve more demanding clients at the right price point and quality without giving away product.

It is these high-performing operators that are adjusting their business models to be connected to up- and downstream production. It is one reason that ISA already has strong supporters in the mining community. This community can see the benefits of leveraging the various standards available through ISA. Take, for example, the integrated coal power plants that autonomously consume coal based on the "operations schedule," and therefore their site power nominations (figure 2). For gold companies, it is all about tracking the gold production at each stage of the process.

Whatever the commodity, predictability goes up by linking the production schedule to key input variables of the mine as well as to the desired output variables that inform the commodity trading and risk management function. With consumers increasingly demanding to know how things are produced, the relevance of the ISA standards keeps growing to help meet the promise of the Internet of Things (IoT), smart grids, and the increasing demands for value chain flexibility. Solving such constraints through ISA-95 standard adoption is likely to accelerate your outcomes.


 

Figure 2. ISA-95 operations schedule per B2MML 6.0


 

Adopt ISA-95 modeling

How should companies go about it? Having an experienced team can help beat nameplate levels. Mature teams that have seen more "unexpected" cases and scenarios are most efficient at returning their plants to a desired state. As mining naturally has a lot of variability across many parts of the value chain, the adoption of software, technology, business intelligence (BI), and analytics is key to the success of many mining companies. For these companies to be in a position to control their entire process, their IT/OT teams need a language to communicate their various data and information needs. This is where the ISA-95 standard has made a lot of difference.

Traditionally, IT has owned the business planning functions (the enterprise resource planning [ERP]/office domain) at Level 4 per the standard (figure 3). The operational technology team has traditionally owned at least Levels 0 through 2 and often Level 3, depending on IT's level of interest in supporting the production process. The adoption of the standard has helped create interfacing and has led to sound but traditional reporting solutions.


 

Figure 3. The ISA-95 standard addresses horizontal and vertical integration of activities in the value chain through the adoption of a level for each activity.


 

Some companies have now started automating the data management side of these reports by using robotic process automation (RPA), which helps finish weekly, monthly, or yearly reporting in a fraction of the time by automating 60 to 80 percent of the manual work.
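As a sketch of the data-management step such a bot automates, consider the Python fragment below, which rolls shift-level production logs up into a weekly report. It is a minimal illustration under assumptions: the file name, column names, and pandas-based approach are invented, not a reference to any particular RPA product or site.

    # RPA data-management sketch: roll shift logs up into the weekly report.
    # File name and column names are illustrative, not from any real site.
    import pandas as pd

    shifts = pd.read_csv("shift_production.csv", parse_dates=["shift_start"])

    weekly = (
        shifts.set_index("shift_start")
        .resample("W")
        .agg({"tonnes": "sum", "grade_pct": "mean"})
    )
    weekly.to_excel("weekly_production_report.xlsx")  # report lands automatically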

The Level 2 software solutions in the standard address process and equipment control functionality, but minimal product control functionality. Level 3 solutions often address the more complex activities and processes like scheduling, but solutions in this space have traditionally not been as available and reliable as the process control system/supervisory control and data acquisition solutions at Level 2. Level 4 business planning and logistics software solutions traditionally come with some reporting functionality, but need proper integration with Levels 0-3 per the standard for these reports to be accurate and timely.

Over time, users at each level create various reports for every role, activity, or unit operation. Also, there is likely a version for every time horizon (hour, day, week, and month). When adopting the ISA-95 standard, your enterprise architecture teams have a solid starting point for achieving a connected organization and are likely to have sufficient control over their overall process.

ISA-95, through the Business To Manufacturing Markup Language (B2MML), addresses the most efficient (minimal) interactions between typical operations and business processes, which it calls "activities." The other main benefit of the ISA-95 standard is that the integration of the desired "production request" and the capacity of the pit, plant, rail, or loading arm "segments" can be instructed and adjusted in time. In ISA-95 this is defined as "the production schedule," where the "work schedule" shows the availability and progress on the production floor.

The intent of the standard is that it links product and equipment availability as well as product constraints and quality constraints (like hardness and grade) simultaneously through applying one or multiple local or global models, depending on the level of implementation and maturity. High-performing operators schedule and predict product parameters from drilling, hauling, and sampling in the pit as accurately as possible, so that the product hitting the comminution plant is already well understood. This helps the shift operators and metallurgists of the plant maximize the asset.

By having the best data possible "at hand" at each stage of the process, the (maximum) productivity of the asset is ultimately achieved.

Further, to keep the plant up and running, people and asset availability can be considered, along with monitoring or even requiring the competencies or capacities to perform specific activities. For example, drilling only continues when a complete team is underground to perform the activity safely. If someone in that example leaves a certain area underground, the schedule is informed by the dispatching function (e.g., VIMS), and resulting actions prevent blasting or drilling functions from being performed.
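As a minimal sketch of how such an interlock might be expressed in software, the Python fragment below blocks an activity until the full crew is present in the zone. The crew sizes, names, and the idea of a live roster fed from dispatch are illustrative assumptions, not a description of any vendor's system.

    # Hypothetical interlock sketch: an activity proceeds only when the
    # dispatch function reports a complete crew in the zone.
    REQUIRED_CREW_SIZE = {"drilling": 4, "blasting": 3}

    def activity_permitted(activity: str, crew_in_zone: set) -> bool:
        """Allow an activity only when the full crew is present."""
        return len(crew_in_zone) >= REQUIRED_CREW_SIZE.get(activity, 1)

    # The dispatching function (e.g., a VIMS feed) updates the roster live.
    zone_roster = {"operator_1", "operator_2", "operator_3"}

    if not activity_permitted("drilling", zone_roster):
        print("Hold drilling: incomplete crew in zone")  # schedule is informed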

In the case of remote or even fully autonomous operations, there needs to be a lot of data available to make sure all activities are under control. It is the reason that analytics is currently high on the agenda of many operators.

Analytics

The model and scenarios define the typical desired mining value chain relationships, as well as the various domains, via a layered and modular model. For IT enterprise architects, this is comparable to how The Open Group Architecture Framework (TOGAF) describes the Architecture Development Method (ADM), functional business architecture, systems architecture, and technology architecture.

Both TOGAF and the ISA-99 standard aim to create separate domains for specific functions or "activities," as the ISA-95 standard calls them. From a safety perspective, the difference between the two is that the ISA-99 standard is specifically created around the OT domain, because industrial automation has specific security requirements and challenges. Based on this, many traditional operators would say that standards like these have been sufficient to design, build, run, and maintain an effective mining business. But not anymore!

More and more data is now available. Through superior analytics, users can pinpoint where to look for additional productivity gains across the value chain. Analytics solutions allow them to consider many additional variables that traditionally could not have been considered or afforded within the control, manufacturing execution system, or even ERP domains. The power and flexible architecture of this technology also allows starting small or scaling up by combining data from many domains and even across multiple sites. Third parties, like original equipment manufacturers, benchmark their equipment information at a significant level of fidelity, and, in some other areas, live trading information is at hand to be used. In the past, the lack of affordable computing power simply did not allow value chain modeling at this granularity. The combination of big data with artificial intelligence (AI) opens a whole new range of possibilities. As this technology matures, significant investment in this space will still be required to attain the next level of productivity and safety across the value chain, holistically.

The industry today is still modeling product and process variables locally for what one would call unit or cross-unit optimization, such as ball mill optimization, grinding, or flotation.

Sometimes these functions are performed in multivariable controllers in the process control layer. At other times, the models sit at Level 3 or 4 per the ISA-95 standard for other functional or historical reasons. In the past couple of years, there has been increased investment in and uptake of this kind of model after organizations execute an initial analytics proof of concept (PoC), which is often aimed at proving its potential to traditional leadership. Still, it is worth asking the question given the analytics hype: How many companies have crossed the chasm and performed global rollouts of models in each of their core functions?

From this perspective, mining is in the "early adopters" phase, where energy, banking, telecommunications, and retail industries might be a step ahead, because the majority have adopted analytics. For mining processing, the energy industry (upstream and refining) is likely to be where complex and mature models already exist and where the basic unit operation functions are more repeatable and predictable. Directly applying these models in the mining industry is not always possible, because in mining variability appears across the value chain in different areas. This is often overlooked when automation or software vendors want to bring in solutions from other industries. The attributes and variabilities of different commodities also result in the need to address a quite different process flowsheet, therefore posing the biggest challenge for today's tier 1 mining companies that traditionally have aimed for a diverse portfolio to reduce financial exposure and risk.

How to start getting the right data?

Targeting the core activities that all the assets have in common is a good starting point when looking for additional productivity, and is part of the intent behind the ISA-95 standard. Another benefit of the standard is that the definitions help with communication via a common language. Complex, mining-specific naming conventions make it hard for engineers to collaborate with analysts, programmers, or mining, equipment, and technology specialists (METS). These professionals are all essential in running today's mines, but traditionally do not all have a strong IT background.

The ISA integration definitions for the main integration points between core activities are already well defined by B2MML interfaces, which typically use XML. These interfaces help accelerate how mining systems should be communicating, both vertically and horizontally across the IT/OT solution architecture.
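As an illustration of what such an interface message can look like, the Python sketch below assembles a minimal B2MML-style operations schedule document. The namespace URI follows the B2MML naming pattern but should be verified against the actual B2MML 6.0 schemas, the element structure is a simplified subset, and the IDs are invented.

    # Minimal, illustrative B2MML-style operations schedule message.
    # Namespace and element names are simplified assumptions; check the
    # real B2MML 6.0 schemas before using this pattern in production.
    import xml.etree.ElementTree as ET

    NS = "http://www.mesa.org/xml/B2MML-V0600"  # assumed V6.0 namespace URI
    ET.register_namespace("", NS)

    schedule = ET.Element(f"{{{NS}}}OperationsSchedule")
    ET.SubElement(schedule, f"{{{NS}}}ID").text = "SCHED-W32"
    request = ET.SubElement(schedule, f"{{{NS}}}OperationsRequest")
    ET.SubElement(request, f"{{{NS}}}ID").text = "REQ-001"
    segment = ET.SubElement(request, f"{{{NS}}}SegmentRequirement")
    ET.SubElement(segment, f"{{{NS}}}ProcessSegmentID").text = "CRUSHING"

    print(ET.tostring(schedule, encoding="unicode"))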

In addition to the benefits of the standard, there are major technology advances in analytics and process control, where OPC UA is now applied efficiently for secure data communication across domains. Earlier protocols had integration and security challenges, because they were not natively designed to solve those issues.
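To make that concrete, here is a sketch of reading a single plant tag over OPC UA with message signing and encryption, using the open-source asyncua Python package. The endpoint URL, certificate paths, and node ID are placeholders.

    # Sketch: read one plant tag over a signed and encrypted OPC UA session.
    # Endpoint, certificates, and node ID below are placeholders.
    import asyncio
    from asyncua import Client

    async def main():
        client = Client("opc.tcp://historian.example.com:4840")
        # Basic256Sha256 with SignAndEncrypt protects data across domains.
        await client.set_security_string(
            "Basic256Sha256,SignAndEncrypt,client_cert.pem,client_key.pem"
        )
        async with client:
            mill_feed = client.get_node("ns=2;s=Mill1.FeedRate")
            print(await mill_feed.read_value())

    asyncio.run(main())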

Is the answer to productivity directly creating one large, global, connected model of your entire value chain?

The answer is "no." Analytics projects in early adopter mining companies have typically been more successful when approached locally before trying a more global approach. After performing a couple of focused PoCs, it becomes clearer how to scale and adopt models into the various functions for a more mature automated and global approach that considers parameters applicable to the entire value chain. Teams need to learn how to collaborate and determine what the data means before adjusting their processes.

As team confidence goes up, it is still wise to start with a low-fidelity approach to see the cross-value-chain effects of specific local process set point changes on the overall target functions before any heavy lifting is considered. A light "global" model with variables like rate, grade, and maybe residence time of production is a good starting point before considering the many other possible variables. With at least these parameters as a baseline, it will become more obvious where model changes or oversimplification might be causing suboptimization.
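The toy Python model below shows the idea: a local set point change (faster feed, shorter residence time) is scored against a whole-of-chain objective that includes a downstream rail constraint. All coefficients and constraints are invented for illustration.

    # Low-fidelity "global" model sketch: how does a local set point change
    # propagate to the overall objective? All numbers are illustrative.
    def metal_tonnes_per_hour(feed_rate_tph, grade_pct, residence_min):
        recovery = min(0.92, 0.70 + 0.004 * residence_min)  # longer residence helps
        rail_capacity_tph = 950.0                           # downstream constraint
        plant_out = min(feed_rate_tph, rail_capacity_tph) * recovery
        return plant_out * grade_pct / 100.0

    baseline = metal_tonnes_per_hour(1000.0, 2.1, 45.0)
    scenario = metal_tonnes_per_hour(1100.0, 2.1, 40.0)  # "faster" local set point
    print(f"baseline {baseline:.1f} t/h vs scenario {scenario:.1f} t/h")

In this invented example, the "faster" scenario actually yields less metal per hour, because the rail constraint caps throughput while the shorter residence time reduces recovery. Exposing exactly that kind of suboptimization is the point of the light global model.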

Light control model

The light control model, therefore, is a good baseline for comparing the progress of the overall objective functions before investing more analytics dollars in larger and more complex multivariable (machine-learning) models. Having such a baseline is a good guide to whether you can achieve still more value by increasing the fidelity of your sub- or global models. If, after some early wins, the "value per analytics dollar invested" decreases, there is still the ultimate challenge of starting to connect to even larger enterprise models that go beyond specific assets.

A good example of such an enterprise model is blending from multiple iron ore sites (e.g., Pilbara, Hunter Valley, and Minas Gerais) to minimize product giveaway, initially with stockpile inventory, but gradually reducing inventories and costs as the models run faster and better. This allows companies to change from traditional produce-to-stock models to just-in-time models. Such models only run well if they do not dilute the net present value and long-term strategy of your operation.
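Framed as a linear program, such a blending decision might look like the sketch below, which chooses tonnes from three sites to meet a contracted grade at least cost. The grades, costs, capacities, and specification are all illustrative assumptions.

    # Blending-as-LP sketch: meet a contracted grade at least cost across
    # three sites. All grades, costs, and limits are illustrative.
    from scipy.optimize import linprog

    grades = [62.5, 58.0, 65.1]   # % Fe per site
    costs = [30.0, 22.0, 38.0]    # $/t delivered
    demand_t = 100_000.0          # contracted tonnes
    spec_fe = 61.0                # contracted grade

    # Minimize cost s.t. tonnes sum to demand and blended grade >= spec.
    result = linprog(
        c=costs,
        A_ub=[[spec_fe - g for g in grades]],  # sum((spec - g_i) * x_i) <= 0
        b_ub=[0.0],
        A_eq=[[1.0, 1.0, 1.0]],
        b_eq=[demand_t],
        bounds=[(0.0, 60_000.0)] * 3,          # per-site capacity
    )
    print(dict(zip(["site_a", "site_b", "site_c"], result.x)))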

Another example of an enterprise model is modeling shared rail capacity with production efficiency in mind. Allocating rail capacity dynamically for multiple end users based on current status and flexibility of the value chain against contract variables and service levels is a great area to explore with this new technology.

In the future, these kinds of "super models" could have an even larger overall impact by optimizing revenue instead of product volume, quality, or grade for any one specific site. Even more value can be captured for both producers and end users if real-time slot booking (including penalties for any over- or under-delivery) can be applied over time, as is the case in other industries (e.g., oil terminals and refineries).

Challenge

Capturing such complex challenges is potentially not far away, now that combined technology and software solutions are starting to scale and are more open in nature, and now that good initial proofs of concept have already captured significant value. More recent greenfield assets have had a strong potential advantage in adopting new technology with more and better sensor data. These greenfield sites have typically adopted different and more dynamic reporting systems and architectures.

You would expect these sites to be positioned for better analytics outcomes. However, from a traditional mining perspective, these newer sites often struggle to ramp up to nameplate levels. They often have neither a mature team nor a predictable, canonical, functional/role-based reporting solution to fall back on, as IT budgets have gone to the latest and greatest analytics architectures and minimal money is spent on traditional reporting.

Two-level approach

There is a strong case, therefore, for starting with a lightweight hybrid reporting model and a best-practice analytics architecture. Mining analytics initiatives also require mining experts to be more than a little computer savvy when working with big data servers and various nontraditional client tools to perform regression, clustering, decision tree, and nearest neighbor machine-learning algorithms (to mention just a few).
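The short Python sketch below conveys the flavor of that work: comparing a regression, a decision tree, and a nearest neighbor model on synthetic mill data. The feature names and the underlying relationship are invented for illustration.

    # Model-comparison sketch on synthetic mill data; features and the
    # underlying relationship are invented for illustration only.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    n = 500
    hardness = rng.uniform(5, 15, n)      # ore hardness index
    feed_size = rng.uniform(80, 120, n)   # crusher product size, mm
    moisture = rng.uniform(2, 8, n)       # %
    throughput = 1200 - 30 * hardness - 2 * feed_size + rng.normal(0, 25, n)

    X = np.column_stack([hardness, feed_size, moisture])
    for model in (LinearRegression(), DecisionTreeRegressor(max_depth=4),
                  KNeighborsRegressor(n_neighbors=10)):
        r2 = cross_val_score(model, X, throughput, cv=5, scoring="r2").mean()
        print(type(model).__name__, round(r2, 3))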

The mature teams that have been "playing in the sandpit" for a while start to see how everything hangs together for key bottlenecks in the pit or plant. For mining experts to be efficient in such an environment and achieve productivity results requires some "hands on the tools" and training to use various machine-learning models.

It takes time for these teams to understand the basic requirements of first having high-quality data and using one common language to help point (with respect) "creative analytics propeller heads" in the right direction. If they focus only on analytics tools, it can take a while before the process is understood from the data.

Context

When IT and analytics teams are remote, it is hard for them to put data in the right context. Optimization projects performed remotely from the site take much more time or very significant (and therefore expensive) experience to make significant findings. "Not knowing" the physical process of the plant and its variability issues is still the top reason why many analytics and improvement projects fail.

When good piping and instrumentation diagrams, process control charts, and alarming limits are at hand and the physical design of the asset is available in two- and three-dimensional drawings or models, being remote will likely be much less of an issue. Still, there is a strong benefit of being local. Experience shows that engineering artifacts of brownfield sites are not always kept up to date, and plants continuously keep changing to achieve or improve nameplate levels. Engineering design databases and drawings are a great source for analysts, who often need to call upon site experts to understand issues or physical plant limitations.

It is a pity that, traditionally, engineering, procurement, and construction (EPC) and engineering, procurement, and construction management (EPCM) firms do not have much opportunity to jointly invest in a longer start-up phase with operations. Teams have to rely on process control vendors or system integrators, who typically are not as intimate with the metallurgy and geology of the reserve and are more focused on hardware, software, and traditional unit control than on product control. This is an opportunity for the industry as a whole.

Collaboration with the METS

There must be the realization that having METS expertise in improvement teams (for both vendor and operator) will have a huge impact. It is the experienced METS specialists who can point out why issues arise through the (in)consistency of the metallurgy, geology, geotechnical, and geochemical functions, and how data and initial models can relate to functions like planning, scheduling, operations, maintenance, and potential downtime.

An integrated team that talks to technology vendors, but is also intimate with the process, IT/OT technology, and even the softer side of people and change processes will help the best companies adopt change rapidly and truly achieve high performance. Forced by the productivity agenda, the majors are making inroads in creating some of these cross-functional teams. To them, it will soon be clear how production variables relate to energy and water use, as well as how inputs relate to safer and more efficient operations. They will soon better understand how these parameters cause downtime directly or indirectly.

The teams that are persistent in their journey will ultimately find the physical or natural limits of equipment and its relationships to ore attributes like hardness. As in other industries, the innovators and early adopters will soon be followed by the early and late majority that will include many of the tier 2 players that are also starting to invest in this capability. By then, software solutions and the processes that lead to fact finding, interpretation, and modeling for this industry will have matured.

Embedding best practice

Significant challenges are still ahead before this can be achieved. Recent research shows that only 50 percent of analytics projects deliver a significant and expected upside.

Even though some have achieved real savings, many projects have achieved only below-average results. Other projects have been put on hold halfway through until more process and ownership within the organization is defined.

Starting with poor data has also been a major reason for this. Additionally, projects like these are initially seen almost as technology projects "for vendors to prove" and for the industry to benefit from. After a vendor-led approach, teams often go back to old habits when the project is finished, without embedding the new processes and learning points into their business. Therefore, as with quality and safety, analytics capability will soon be a responsibility for many, not only for the team that gets to play with it through the first initiatives and PoCs. Companies will need to involve vendors that provide the technology, software, and methodology, but will have to put their own governance into place to make analytics part of their core operating model and business processes and systems. Few resource companies are at that level of maturity yet when it comes to analytics.

Theoretical approach

Another factor that slows down adoption and progress is that analytics is approached very differently from traditional reporting and BI, so it should not be treated the same. Unlike reporting, which is often predefined and holds few surprises, analytics projects bring up many theoretical ideas and suggestions. Often, suggestions that theoretically make sense do not make good business sense. Practically implementing findings in the current investment climate and brownfield environment to capture the potential results is not always feasible.

Some findings are likely to be more structural in nature. Through traditional reporting and BI, the "easy stuff" has already been found in an industry that has existed for many centuries.

To make money in the mining industry, there has always been the need to truly understand the mining process. This has led to efficient mining operations in the past, and it will also be the case in the future. It is likely, though, that a higher level of automation and machine-learning models will take over complex functions that geologists, lab analysts, maintenance personnel, and operations teams deal with today. We will see central models and decentralized submodels, if only because vendors will want to differentiate their technology products and services.

Therefore, investing in analytics and IoT technology (i.e., providing the sensors, the data, and sometimes the control) is investing in understanding the process before deciding how to ultimately model and automate it for improved productivity, reduced downtime, and lower cost. Only companies that go all in on the analytics journey will find all these dimensions and will be able to run their operations at the lower cost percentile that this technology opportunity opens up.

By defining realistic benefit targets and acceptable risk levels, companies will make the expected progress. Some will benefit significantly from their adjustments and will define a low-cost and flexible operation that can respond to live commodity trading and risk management market data.

Currently it is only analytics technology that has the potential capacity to take those factors into consideration, so high-fidelity models can be put in place to react to any leading and lagging indicators of the enterprise for any time horizon, live.

Does that put other current software vendors out of business? Absolutely not. The IT and OT processes still must be run and controlled, but it will be the vendors with flexible road maps, open software, and technology that clients will be looking for. These vendors can accommodate this new intelligence and help end users embed repeatable models into their software libraries. That puts them first in line to be the productivity partners of the future.

Keeping it secure

Finally, mining will only be sustainable if current and new company intellectual property is protected. After all, mining involves significant money. With production strongly affecting the daily share price, people will try to gain an edge, and mining systems have therefore increasingly become a target for hackers.

Miners cannot sit back and wait for this to pass, as their growing range of autonomous equipment and complex expensive technology will need to be kept in the right hands. Intrusions need to have minimal impact, if only to keep public trust and the license to operate.

It will be interesting to see how the adoption rate of ISA standards like ISA-99 is combined with other cybersecurity and enterprise architecture standards, and how IT and OT environments might merge into one seamless working environment without any visible vertical layers or horizontal silos.

The awareness of the benefits is there, and these standards have now reached the boardrooms of most major organizations, where previously only a handful of subject-matter experts would have been aware of them. Cloud technology has now matured considerably, and more OT systems are cloud-enabled, as IT systems have already become in recent years. In this area, software vendors are leading the way, and clients are still playing catch-up.

It will not be decades before we see the first operations that leverage a single secure, seamless, and connected modular environment without all the layers that we need today to achieve real-time performance and security. Maybe ISA should create an initiative that aims for the next level in collaboration with software vendors, technology vendors, and end users.

Until that time, I cannot wait to see the benefits of a truly connected and optimized enterprise.


About the Author


Bas Mutsaers has been an ISA member since 2005 and is a board member of the new ISA Mining and Metals Division.