Interview – Arch Systems’ Andrew Scheurmann and Tim Burke

Arch Systems takes an innovative approach to factory optimization, using data mining to identify untapped utilization across factories, systems, and lines. Here to tell us more are the company's CEO, Andrew Scheurmann, and Tim Burke, the cofounder and CTO.

For those viewers who don't know Arch Systems, can you tell us a little bit about the company and what you do, Andrew?

Andrew Scheurmann: Yeah, happy to. Arch Systems is a venture-backed company focused on machine data and analytics, specifically in the electronics manufacturing space. We focus on surface mount technology lines, but we also do some work in the back-end mechanical machines, such as injection molding, et cetera. The main activity that we're involved in is extracting rich data from a large variety of machines, new and legacy, to power next-generation analytics. Sometimes this is just directly collaborating with other tools in the factory like the MES, but for the most part it's creating new categories of analytics, such as analyzing the machines' utilization, what they're really achieving versus capacity, and exactly how to improve that in as automated a way as possible.

For the legacy equipment, do you have your own sort of middleware that you put in there, or are you using third-party stuff?

Tim Burke: We have our own suite of ways to connect to various machines, anything from using the vendor-provided APIs that are available on each individual machine, through finding log files that are present on the machine but maybe not exported or typically used, to providing our own IOTile-based modular hardware when required. So it's a whole suite, anywhere from standardized hardware through proprietary interfaces, files, and databases, down to the PLC or hardware level as needed.
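As a rough illustration of the layered connector suite Tim describes, here is a minimal Python sketch: every source (vendor API, on-machine log file, retrofit hardware) is normalized behind one interface. The class names, the client's poll() call, and the log format are hypothetical stand-ins, not Arch Systems' actual interfaces.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import datetime
from typing import Iterator


@dataclass
class MachineEvent:
    machine_id: str
    timestamp: datetime
    kind: str      # e.g. "cycle_complete", "error", "state_change"
    payload: dict


class MachineConnector(ABC):
    @abstractmethod
    def events(self) -> Iterator[MachineEvent]:
        """Yield normalized events regardless of the underlying source."""


class VendorApiConnector(MachineConnector):
    """Wraps a vendor-provided machine API, where one exists."""
    def __init__(self, machine_id, client):
        self.machine_id, self.client = machine_id, client

    def events(self):
        for raw in self.client.poll():  # hypothetical vendor call
            yield MachineEvent(self.machine_id, raw["ts"], raw["type"], raw)


class LogFileConnector(MachineConnector):
    """Reads a log file the machine writes locally but never exports."""
    def __init__(self, machine_id, path):
        self.machine_id, self.path = machine_id, path

    def events(self):
        with open(self.path) as f:
            for line in f:
                ts, kind, rest = line.rstrip("\n").split("|", 2)
                yield MachineEvent(self.machine_id,
                                   datetime.fromisoformat(ts),
                                   kind, {"raw": rest})
```

A retrofit-hardware connector would slot in the same way; everything downstream only ever sees MachineEvent streams.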

So what makes you different? What makes Arch Systems different from the many companies offering MES and line production software systems?

Andrew: In terms of the MES, we like to say that the MES runs the factory and Arch observes and improves the factory. So you can see these as two fundamental parts overall. Sometimes when people ask how we are different, they maybe compare it to, okay, there are all these other analytics platforms out there, so many technologies. And compared to that category we are industry focused. So we're an end-to-end vertical solution that collects data, analyzes it, and improves electronics manufacturing specifically. If you compare it to the MES, like you just said, then it's more about, we're not involved in grabbing the work order, the material, backflush completing things, and that day-to-day running of the factory, but instead getting the data to observe everything that's happening and being able to quantify how much efficiency is being lost for this reason versus that reason, and then drilling down to exactly how to improve that.

Some of those might be what we call tactical improvements, which could happen even inside the MES, so things that you fix in real time or fix day by day. And some of them are strategic improvements, maybe even something as complex as not just the changeovers inside a factory, but changing how you think about your equipment and products being run over multiple factories, which is something you would change on the order of months, not days or even weeks.

Tim, you mentioned that you were extracting the information off of legacy equipment that was already in the factories. But you can only extract the data that's available on these legacy machines; you can't really take stuff that's not there. Is that right? You refer to doing something called machine data mining. What do you mean by that?

Tim: In terms of machine data mining, as you say, Trevor, you can't make data that's not there. And what we have found is that, even on legacy machines, there's typically a wide array of data about what they're doing. So they don't know things about other machines, and maybe they don't have advanced, pre-packaged ways of reporting their activities in a nice, easy-to-consume file. But when you get down into it, they know all of the details of what work they perform, because they have to do it. So even if it's the oldest pick and place machine in your factory or the oldest solder printer, it still knows that it's printing and how it's printing; it's able to control pressures, it has servos and motors.

And so what we've found is that you can substitute for this. If the machine reported the exact data you want to see, just as a number, like my health, my health is right now 50%, now it's 40%, now it's 20%, then the analytics job would be easy. You just get that number from the machine, you say, "When it's less than 10% I replace the machine," the machine tells you, problem solved. With these legacy machines, what we're able to do, by having so many machines connected, of both new and older kinds but doing similar jobs and similar roles in the factory, is actually build up that simplified, processed data set of, "Here's my health," or, "Here's an error code that's really important." Even if the machine doesn't report it directly, there are enough ancillary details that, when combined with the right algorithms and domain knowledge about how a factory works, how an EMS factory works specifically, how these machines work specifically, we can actually tease out those very simple answers, even though the data reported from the machine is highly detailed but very low level on these legacy machines.

Andrew: Could I just add something right on the end of that?

One thing that might be helpful for folks as they think about this: when you think about the most advanced machines in the factory, they're also pulling from low-level data, the PLCs, the IOs, basic sensors, and then they build up a machine model inside and maybe present a nice error code. In some cases, let's say there's a temperature sensor and that temperature sensor is just not inside the older machine. Okay, then maybe you're fundamentally limited. But in many cases that lower-level data is in fact there but it's not being put together. And so one of the things that makes us unique is the ability not just to build that cloud algorithm, the machine learning type, but also to build the algorithms or signal processing that turn low-level machine data into a usable machine model. And that helps bridge that gap between new and legacy.
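To make that concrete, here is a toy sketch of distilling a simple health number from the low-level signals a legacy machine does report. The event kinds, weights, and thresholds are all invented for illustration; a real model would be fit across the connected fleet, as Tim describes.

```python
def health_score(events):
    """Roll low-level pick/pressure events up into one 0-1 health number."""
    picks = sum(1 for e in events if e["kind"] == "pick_attempt")
    retries = sum(1 for e in events if e["kind"] == "pick_retry")
    pressure_faults = sum(1 for e in events
                          if e["kind"] == "pressure" and e["value"] < 0.8)
    if picks == 0:
        return None  # machine idle: no evidence either way

    retry_rate = retries / picks
    fault_rate = pressure_faults / picks
    # Invented weights/thresholds: a 5% retry rate or a 2% pressure-fault
    # rate is treated as "fully degraded" on that signal.
    score = 1.0 - (0.6 * min(retry_rate / 0.05, 1.0)
                   + 0.4 * min(fault_rate / 0.02, 1.0))
    return max(score, 0.0)


events = [{"kind": "pick_attempt"}] * 200 + [{"kind": "pick_retry"}] * 6
print(f"health: {health_score(events):.0%}")  # 64% at a 3% retry rate
```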

Who else is doing machine data mining and looking for all this untapped utilization?

Andrew: There are a lot of companies that are using the kind of meta method we are, mining data, not necessarily electronics factory machine data, but mining data and looking for untapped utilization in processes. Just to mention one, and I don't know if the viewers will be as familiar because a lot of these are new companies, but one is called Celonis. They've raised $350 million, but they mine data out of the ERP as opposed to out of factory machines, and they look for untapped utilization in, say, a payments process, how many invoices are moving slowly versus the speed at which they could have moved. And the same kind of method, data, untapped speed, untapped opportunities, is being played out in marketing tools and in sales tools, so all different areas of the enterprise's technology stack.

And it's not as common yet in factory machines, largely because of this legacy problem, because getting the data is in fact harder. I don't mean only getting any data but, as you said, Trevor, getting it into a usable form. So there's getting the data, which is happening everywhere. There's getting the data into a usable form, which is already easy in a lot of pure software realms, but that part's pretty hard in the factory. And then there's mining untapped utilization and opportunities. So inside of electronics manufacturing I think this is pretty new, but this is a trend that's starting to play out across a lot of different stacks in the enterprise as a whole.

Gathering a lot of this big data using AI is relatively straightforward, you could say. But turning it into closed loop, actionable instructions takes a well-written algorithm. Why do you think it’s your job to create that algorithm and not, for example, the providers of inspection equipment who are essentially the eyes and ears on the line?

Andrew: Yeah, that's a good question. Certainly we're very pro-partnership, and we've come to the space always asking who we should be working with on a given closed loop, a given algorithm. And we don't assume that it has to be done from scratch, or that the existing players are not the right ones; they're very sophisticated. I guess I'll give two answers to the question. One is that a lot of the analytics we're doing are not only focused on quality. The inspection vendors' primary purpose is to inspect a board, see that there's a short, there's some kind of solder paste problem, whatever it is, and be able to correlate that back. And there will be a lot of fascinating and powerful closed loops in the future that may be done completely machine to machine, as you said.

So the inspection vendor, be it Koh Young, ViTrox or another, sees a problem on the line; it has a machine-to-machine integration directly back, and that automatically corrects the placement of the head or changes a parameter in the solder paste printer, possibly. And a lot of those need no middleware, nothing else is necessarily involved there. Now, at the same time there could be a system just like Arch that was mining data about quality, still talking about quality, and seeing, before that closed loop was implemented, that there in fact was a big problem starting to develop on these five lines but not those lines, or with these recipes or with these products, or that these are the conditions that caused it to happen. And that might be exactly the system that helps motivate where you can put in that investment to build the closed loops in an automated way. So we may be involved at a meta level with quality problems, both in saying where they're the worst and where they should be fixed, and maybe being able to collaborate with the inspection vendors to build them in the right way. So that's answer number one. Answer number two is that there are in fact a lot of closed-loop or human-in-the-loop problems that are not necessarily best suited for the inspection vendors. And just one example of that, again going back to utilization, is if the line is being slow.

There are a lot of high-throughput lines that need to move as fast as possible. And we may be able to analyze that the inspection machine is in fact the bottleneck that's slowing the line down, or that a slow feeder is slowing it down. Different things across the line could be slowing them down, and then the action to be taken is in some cases automated, and in some cases the right human coming out and changing a feeder. And that transcends any one individual machine type or vendor necessarily being the right company to do that.
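A minimal sketch of that kind of bottleneck check, assuming effective per-board cycle times have already been extracted for each machine on the line (the figures below are invented):

```python
# Effective seconds per board for each machine on one line
# (illustrative figures, not real measurements).
line = {
    "printer": 9.5,
    "spi": 8.1,
    "pick_and_place": 11.2,
    "aoi": 12.7,   # inspection running slower than the pick and place
    "oven": 10.0,
}

bottleneck = max(line, key=line.get)
if bottleneck != "pick_and_place":
    gap = line[bottleneck] - line["pick_and_place"]
    print(f"{bottleneck} is pacing the line, "
          f"costing {gap:.1f}s per board over the pick and place")
```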

So you’re gathering your data across many factory sites. The biggest variable in most of these factories is the solder paste itself, which performs differently from site to site and even from line to line. And that depends on the age of the paste, the variations in the batch, the ambient temperature and humidity. How is it possible to build that into an algorithm?

Tim: Let me answer that in a couple of ways. One part of it is that it requires having the right amount of rich data from enough lines to see the variation. For example, often an approach we see is, "Let's take a golden line approach, let's do this on one line and figure it out." And you take that approach to a second line and a third line, and you're like, "Oh, shoot, my second line actually has different humidity. I didn't have that as one of my parameters, so the model that I made on a single line didn't translate." And I think maybe that's the impetus for why this has become an important question: how do you account for factory-to-factory variation, how do you account for things that you didn't measure on the first line?

So that actually would not be our approach. Our approach would be to not necessarily start with solder paste, but to collect all this data from the printer, from the SPI, from the AOI, from the pick and place, from the oven, because we're able to solve other problems with it, such as utilization improvements, feeder maintenance, and nozzle maintenance, and then get that installed in a wide range of factories. And now suddenly you have factories on five continents, factories with a wide range of humidity, where you can truly see which of these problems only happen in this particular factory in this particular month, and which problems seem to happen across the world in a very similar way. And so we may well find, as you say, that there are certain problems for which you need a humidity sensor on the line or you can't solve them. But we'll also find a number of other problems, specifically with solder paste, that tend to happen regardless of humidity across many factories. And those problems solved on their own have significant ROI for the factory.

And so we can tackle those and then say, "Okay, now we've identified, here are the ones we can tackle across all factories with the one algorithm. Here are the ones that actually need something special." For this problem, if it's really important, and again, we can quantify the importance because we have utilization data and quality data to say, "Here is how much money you're losing because of this particular problem," then the factory can go back and say, "Let me put a humidity sensor on it and now let me solve that problem." So that's how we see it. Basically, getting the data in one place for many lines, solving problems today with that data, using that big data set to identify the commonalities across factories that we can identify and fix with the data that we have, and then targeting the specific cases: if you've got this bit of new data, maybe it's humidity, maybe it's temperature, maybe it's solder paste age, then you can solve a new class of problem, and here's the ROI for that.

Andrew: A funny story to tag onto that, Trevor, is when we do a proof of concept, so when a new customer wants to work with us or an existing one wants us to tackle a new problem, we often say, "Okay, give us two to three different factories and as many of the lines as you can." And they go, "No, no, I don't want you to do that much work. Let me make it simpler for you. Let me give you just one line, just solve it for one line." And we say, "No, that's actually harder for us." Because yes, it's more work to connect all the machines, but as you said, that part is not necessarily the hardest. And if you only have one line and you want me to build a perfect algorithm, there's really no way unless I've already done it.

So in fact, I want 20, 30 lines to be able to see across all of them and make a super set of the problems, which is what Tim was saying. And if you already have all the lines, hundreds of lines, for example, and if you analyze them and you see that the data can describe the problems, you already know it's tractable; you don't have to then go about scaling it. You've already scaled it from the beginning. Whereas if you do look at all the lines and there are no correlations, then you know you have to keep working on the definition of the problem. And so it's a different way to approach these things.
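A toy version of the cross-factory triage Tim and Andrew describe, assuming defect rates and ambient readings have already been aggregated per site. All figures and the 0.8 correlation cutoff are invented for the example; statistics.correlation requires Python 3.10+.

```python
from statistics import correlation  # Pearson's r, Python 3.10+

sites = {
    "site_a": {"humidity": 78, "bridging": 0.031, "tombstoning": 0.012},
    "site_b": {"humidity": 55, "bridging": 0.019, "tombstoning": 0.011},
    "site_c": {"humidity": 40, "bridging": 0.012, "tombstoning": 0.013},
    "site_d": {"humidity": 62, "bridging": 0.024, "tombstoning": 0.010},
}

humidity = [s["humidity"] for s in sites.values()]
for defect in ("bridging", "tombstoning"):
    r = correlation(humidity, [s[defect] for s in sites.values()])
    verdict = ("tracks humidity: instrument that line"
               if abs(r) > 0.8
               else "universal: one algorithm fits all sites")
    print(f"{defect}: r={r:+.2f} -> {verdict}")
```

With this made-up data, bridging correlates strongly with humidity while tombstoning does not, which is exactly the split between "needs a new sensor" and "solvable everywhere today" described above.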

So Andrew, how much do you see this tool, the software suite, as being a quality control tool for OEMs as opposed to a production control tool for contract manufacturers?

Andrew: Most of the companies like Arch, I think, have gone to market or wanted to go to market as more quality control for OEMs. And the OEMs are spending a lot of money on their designs; they care about that very much. Contract manufacturers, on the other hand, are having to pinch every penny because it's a low-margin business. So for that reason, most companies pick route number one. We actually picked route number two. For me they're both valuable, so the tools in the future need to address all of the above. And so I see your question more as, what do you focus on first and why? And we as a company actually picked the second one, which is a production assistance tool for contract manufacturers. And the reason was, one, we saw other people not doing it; it's a big need that people weren't addressing, maybe because it was hard to figure out how to make it a tractable problem.

But what we saw when we looked at the problem was how much untapped data was there. Because contract manufacturers have all the machines; they have all of the lines, comparatively. And so if you can make partnerships that make sense, that make money for these contract manufacturers so they want to work with you, and then you're actually able to start putting this data set together, that's the biggest data set. So that's been our interest. So for us it is primarily a contract manufacturing tool to help improve manufacturing efficiency. And secondarily, in the future, I think it connects with other tools or maybe is extendable to the OEMs.

So what case studies then do you have to demonstrate your thesis?

Andrew: So Arch is working today in some capacity in five of the 10 largest electronics contract manufacturers. We have done our largest-scale work in particular with Flex, formerly Flextronics. We're installed and collecting data from machines in just about every site worldwide. And we've been focused at the largest scale on utilization analytics; we've talked a lot about that. So one of our case studies is the ability to source real-time data from pick and place machines and see exactly how they're configured, even down to what heads and nozzles, which a lot of machines report, and then being able to compute the current utilization versus what the theoretical could be. So part of the smart system is counting everything, but the other part is actually calculating a better target automatically. That's one of the things our customers ask us about: do we have to manually input a target, or do you know what the target should be? We know what the target should be.

So first off, the scores that we generate are really interesting and meaningful. And then the second is, as I mentioned before, being able to apply advanced analytics models to figure out why the score is low. So in terms of specific case studies, we've analyzed many lines in many sites where we see as much as 20% to 40% potential improvements in utilization capacity. And in terms of carrying that all the way through, we've seen as much as a 25% improvement so far by changing things such as the line balancing. A pick and place line should generally be limited by the pick and place; it's the most valuable piece of equipment. All the others are very important, but if your oven or your printer is limiting the throughput of your whole line, you're leaving money on the table.

So you can analyze all your lines, change them so that your pick and place is truly limiting, and then we have an algorithm that can analyze the pick and place machine and show you the opportunities to further streamline things inside of it. And so we can often see around 20% to 40%, which are very big numbers when you talk about utilization improvement for a large contract manufacturer. That's tens of millions or maybe even $100 million of potential long-term impact. And we've seen in specific instances as much as a 25% improvement by carrying the actions all the way through.
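As a back-of-the-envelope illustration of that utilization score, here is a sketch where the theoretical target is derived from the machine's own reported configuration. The rated-speed figure and derating factor are invented; Arch's actual target calculation is not public.

```python
def utilization(placements_done, hours_run, rated_cph, derating=0.85):
    """Score actual output against an automatically derived target.

    rated_cph: vendor-rated components per hour for the reported
    head/nozzle configuration; derating approximates an achievable
    real-world ceiling rather than lab conditions (both illustrative).
    """
    theoretical = hours_run * rated_cph * derating
    return placements_done / theoretical


score = utilization(placements_done=310_000, hours_run=20, rated_cph=25_000)
print(f"utilization: {score:.0%}")  # ~73%, i.e. meaningful headroom
```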

It’s interesting you mentioned Flex there, because their approach to the smart factory environment is they categorize things into, I believe, six pillars of production, covering lots of different areas, including 3D printing and final box build. What have you been able to bring to the areas outside of just the production line, like the box build, for example?

Andrew: Outside of the surface mount production line, one area of recent collaboration with some of our customers has been around 3D printing. In this case, just a simple utilization score is often very interesting on its own. In the case of surface mount lines you're interested in restricting the amount of equipment you have because it's so expensive. But in other cases you're interested in just filling up demand onto your machines. I already have the machine, I can't necessarily decrease its cost, but I would like to route more business to the right machines. Which of my 10, 100 machines should I source the next job to? So utilization analysis can go the other way around too. And that's often interesting outside of surface mount, where each of your machines does a unique job, so you can't get rid of your capacity or sell it off, but you can better use it by sourcing jobs.
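A minimal sketch of that reversed use of utilization, routing the next job to the least-loaded machine that can run it. The machine capabilities and queue depths are made-up example data.

```python
printers = [
    {"id": "p01", "materials": {"ABS", "PLA"}, "queued_hours": 14.0},
    {"id": "p02", "materials": {"PLA"}, "queued_hours": 3.5},
    {"id": "p03", "materials": {"ABS", "PETG"}, "queued_hours": 8.0},
]


def route_job(material, fleet):
    """Pick the capable machine with the least work already queued."""
    capable = [p for p in fleet if material in p["materials"]]
    if not capable:
        raise ValueError(f"no machine can run {material}")
    return min(capable, key=lambda p: p["queued_hours"])


print(route_job("ABS", printers)["id"])  # p03: capable and least loaded
```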

Another example is in partnering with the MES. So we have some collaborations, one of our large collaborations, where we use retrofit hardware such as this pod box here to tap machines like injection molding machines, heat stamps, and stamping presses, and are able to get just a real-time pulse, the cycle time of the machine, for example, and feed that data into the MES to complete the work order. And this is the kind of stuff MESs have always done, it's bread-and-butter MES, but in this case our ability to go low level into a machine augments it, where the MES wasn't able to talk to some of these legacy machines before. So that's outside of SMT, and it's a direct support of the MES system; we built these techniques to do machine data mining analytics, but we can find value for them in a number of different areas.
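A sketch of that retrofit pattern: pulses from a tap on a legacy press are turned into cycle times and unit counts, then reported against the work order. The mes.report() call is a placeholder for whatever integration a given MES actually exposes, not a real API.

```python
from datetime import datetime


class PulseAggregator:
    """Turn one-pulse-per-cycle signals into MES work-order progress."""

    def __init__(self, mes, work_order, report_every=10):
        self.mes, self.work_order = mes, work_order
        self.report_every = report_every
        self.last_pulse = None
        self.cycle_times = []
        self.count = 0

    def on_pulse(self, ts: datetime):
        if self.last_pulse is not None:
            self.cycle_times.append((ts - self.last_pulse).total_seconds())
        self.last_pulse = ts
        self.count += 1
        if self.count % self.report_every == 0:
            avg = sum(self.cycle_times) / len(self.cycle_times)
            # Hypothetical MES call: credit completed units to the order.
            self.mes.report(self.work_order, units=self.report_every,
                            avg_cycle_s=avg)
            self.cycle_times.clear()
```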

Tim: One of the things that distinguishes our approach and methodology from similar past approaches to getting data out of machines is our laser focus on being non-disruptive to the factory. One of the reasons why factories start with the single-line approach is that they assume each line is going to be a giant pile of pain and work to get data out of those machines, and that it's going to cause factory downtime. So let's do the smallest thing possible and limit it. Whereas what we have built instead is a system of techniques that can get data from literally hundreds of machines across many factories with zero factory downtime. And that's been key to our approach, because that's how we can say to the factory, "Give us two sites or give us three sites, and it's the same work for us and it's no downtime for you," versus, "I need to go to the factory, I need to go to the line, I need to personally inspect every machine, I need to install some box on the line, you need to open up a bunch of IT ports for me, it's going to be a pain."

So that's one of the challenges of working with contract manufacturers: being able to find ways to be non-disruptive. But that need has forced us to build this very innovative technology suite around how we get this data. It's not just that we can get it; we can get it without disrupting the factory in any way. And that's allowed us to scale the number of machines we've connected so much faster than is typically seen.

We all know that the materials behave differently across different factory sites, but is equipment pretty much the same from one site to the next? Or is there much variation depending on the location? I don’t know.

Tim: Yeah, it's an interesting question. I guess I would answer it in a couple of different ways. The first is that there is a lot of commonality, in the sense that there are only a couple of different ways to run an SMT line. A couple of standard ways: how many conveyors you're going to have, whether you're going to do real production, mixed production, single runs. And we find that those archetypal patterns, and how they manifest in the signatures that utilization on various machines natively generates, are fairly universal across all these factories, which is what allows us to do these cross-factory analyses so easily. What definitely is not standardized, and where we have had to invest a lot of time and effort in understanding, is all of the different machine versions.

So if you have a large contract manufacturer with an installed base across 30 sites, you're not buying new machines for all your sites. So where we've had to invest a lot of work and build up a lot of technology is, how do I do the same analysis across 30 years of Fuji machines or 25 years of Keysight machines? And you can do it, the data is all there, but it takes a decent bit of, one, partnership with the vendors to really understand what's new, what's changed, and what the commonalities are. That's why the machine vendors and those solution partners are key partners for us to have that open relationship with.

But then there's also just the ability to dig into the data, look at it and say, "Okay, I can get it like this, like this, and like this, and they all point me to the same place." That works across all the different factories, and now I can finally do the comparison and answer the question: do all machines really work the same across the world? And if they don't, what is the difference? Is it really humidity, or is it some bit about how they're being operated? Is there a best practice you could translate, or what is it? We probably can't discuss specifics, but there are a lot of very interesting things that we find there.
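A toy sketch of that version-normalization work: each machine generation gets a small adapter into one shared event schema, so a fleet-wide query doesn't care which decade the machine shipped in. Both log formats below are invented for illustration, not actual vendor formats.

```python
def parse_gen1(line):
    """Oldest generation: terse CSV lines like 'PICKUP_ERR,F12,0934'."""
    kinds = {"PICKUP_ERR": "pick_error", "NOZZLE_CLOG": "nozzle_clog"}
    kind, feeder, hhmm = line.split(",")
    return {"event": kinds[kind], "feeder": feeder,
            "time": f"{hhmm[:2]}:{hhmm[2:]}"}


def parse_gen3(record):
    """Newer generation: structured records with numeric error codes."""
    codes = {211: "pick_error", 305: "nozzle_clog"}
    return {"event": codes[record["err"]],
            "feeder": record["slot"], "time": record["t"]}


# Both generations now answer the same fleet-wide question.
events = [parse_gen1("PICKUP_ERR,F12,0934"),
          parse_gen3({"err": 211, "slot": "F12", "t": "09:41"})]
f12_pick_errors = sum(e["feeder"] == "F12" and e["event"] == "pick_error"
                      for e in events)
print(f12_pick_errors)  # 2
```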

So I want to thank you both for coming in today, it’s a fascinating software suite and a great way of being able to extract value out of the production line. So Andrew and Tim, thank you for joining us.
