09:13:48 ...due to the conversion, which takes about 10 microseconds. 09:13:50 Then we have the counting mode. In this case we only use the signals from the discriminators, and the count rate can go up to about 20 megacounts per second. 09:14:12 The third mode is the timing mode. In this case the discriminator outputs are used to acquire the timestamps of the hits, with a resolution of 500 picoseconds. 09:14:33 And we can also acquire the ToT, which is the time over threshold, and this gives quite a good estimation of the pulse height. 09:14:54 We are going to make a mixed mode in which we have, at the same time, pulse height and also timestamps. 09:15:03 There is a software that we call Janus. The name fits, because the software has, let's say, two faces; it is made of two parts. There is a C program, which runs, let's say, in a console mode, and there is a GUI made with Python. You can use the GUI if you want to, or you can also use the C code alone, in case you want to write your own 09:15:54 software. The program is open source, of course, and it allows you to manage the readout and also the configuration of the SiPM detectors. 09:16:17 So these are some examples of the acquisitions that we did with the board. 09:16:34 In the first case here we have a SiPM matrix, and we used an LED driver to send the light to the sensor. So the position plot shows you the spot of the light on the sensor, and on the right you can see the pulse height 09:17:02 spectrum, with a very good resolution. 09:17:12 In this third case: the software allows you to make a scan of the threshold, and by counting the number of hits we can build this staircase plot on all the channels of the board. 09:17:37 And again, you can see here that there is a very nice and clear
09:17:48 resolution of the single photons. 09:18:31 This one, instead, is due to the multiplexed readout. 09:18:49 This is a test with cosmic 09:18:48 rays. 09:18:50 We ran this for a long, long time, and you can see the result on the right there. 09:19:12 The last slide here for this is, 09:19:23 again, the cosmic ray spectrum acquired with the PHA. And then we did some tests with the cables, to see how much the length of the optical cable affects the noise. 09:19:51 And, as you can see, there is not too much difference between cables up to three meters. 09:20:07 Okay, so this is the concentrator board. In this case, okay, this model is still under development; the board is not done yet, but it will be available very soon. 09:20:30 On the front we have the optical links for the readout of up to 128 FERS units; on the back you have the interface to the computer, which can be 1 or 10 Gigabit Ethernet, or USB 3.0. 09:20:59 Okay, so now let's switch to the second part, the digitizers. We have a new line of digitizers, and this board here is the first one: 09:21:15 the 2740. 09:21:19 It is completely new hardware, with 09:21:31 fast new interfaces, big memories, an improved FPGA, and the open-FPGA solution. 09:21:49 This is the block diagram of the board, in which we have the inputs and the ADCs, and then there is one single big FPGA with the DDR4 memories. 09:22:06 So these are the specifications of the first board: 64 channel inputs, 16 bits, 125 megasamples per second. 09:22:36 So, the roadmap: the first board is the 2740, with the 64 inputs.
There is a board which is going to be ready in a few months; this is the 2745, which is about the same as the 2740, but in this case we have a programmable 09:23:09 gain at the input. And then the new boards that we are going to start: 09:23:19 the 2730, which is 32 inputs at up to 500 megasamples per second; 09:23:28 the 2750/2751, which is 16 inputs at up to 1 gigasample per second; and the 2724, which is 32 inputs at 125 megasamples per second, with some 09:23:54 signal conditioning at the input. 09:23:59 The software for these new digitizers is WaveDump 2, which is a kind of, let's say, digital oscilloscope, or CoMPASS, which is made for the readout of the digitizers with digital pulse processing, so PHA or PSD or QDC and so on. 09:24:33 Okay, that's it. 09:24:39 Carlo. 09:24:41 Yes. 09:24:56 Yeah, Chris, you want to ask? 09:25:00 Sorry. 09:25:04 Chris. 09:25:06 Chris Crawford: Yeah, sorry. 09:25:21 Just a procedural question: do you not see the hand raised? I do now; I had the wrong screen up. Okay, fine. 09:25:25 Go ahead, Chris. 09:25:25 Carlo, that's a great presentation. I was wondering, on the new 2740-series digitizers: 09:25:31 so it's possible to use custom triggers and filters in there. Is it also possible to custom-organize the buffering and the reading out? 09:25:41 Let's say you wanted to put a histogram right inside the firmware, for instance. 09:25:47 Well, as I told you, 09:25:52 this board has been designed to have an open FPGA. Open FPGA means that the user can put his own code in the framework of the board. 09:26:09 Of course, there are some constraints. For the trigger, for example, there is a kind of 09:26:21 block where the user puts his code and generates the trigger, according to
09:26:35 the I/Os of the board, or even 09:26:42 taking data from the ADC inputs, so the user can combine I/O and data streams to generate the 09:26:55 trigger. The data, again, can be changed by the user. So the raw readout goes through FIFOs, and then through the DDR4 memories. 09:27:15 The memories cannot be controlled by the user; these are controlled by our part of the firmware. But the user can change the content of the event 09:27:36 data packets. So in case you want to build histograms in the board, you can do that; the only point is that you have to push these into the data stream, 09:27:56 according to the constraints that we have. 09:28:05 Okay. Thanks. 09:28:10 I don't see any other hands raised. 09:28:14 I had a quick question, Carlo, and a small setup for it. The TDlinks, as you show them, look daisy-chained. So my question is: is there a limitation? Because that link is going to aggregate the number 09:28:33 of cards you have together, so at some point I imagine it might saturate. Or is that an aggregated bandwidth number; right, it was 4.25, 4-point-something gigabits? 09:28:48 Oh, yes, okay. 09:28:50 The link is 4.25 gigabits, and the number of boards that you can daisy-chain in a single link is 16, so you can put up to 16 boards in a single link. Of course, if you do this, 09:29:17 the bandwidth will be shared between the boards. But normally these kinds of boards 09:29:37 don't need too much bandwidth, because the FERS units read out only pulse height and timestamps, so the size of the events is quite small, and you don't need too much bandwidth to read them out. 09:29:53 Okay, thank you. 09:29:56 Any other questions? 09:30:00 If not, thanks to the speaker again. Thank you, Carlo. Good to hear from you.
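[Editor's note: a minimal sketch of the daisy-chain bandwidth budget described in this answer, using only the figures quoted in the talk: a 4.25 Gb/s TDlink shared by up to 16 boards.]

```python
# Daisy-chain budget for the TDlink described above.
# Figures from the talk: 4.25 Gb/s per link, up to 16 boards per chain.

LINK_GBPS = 4.25   # raw optical link rate, Gb/s
MAX_BOARDS = 16    # boards allowed on a single daisy chain

def share_per_board(n_boards: int) -> float:
    """Worst-case even share of the link bandwidth per board, in Gb/s."""
    if not 1 <= n_boards <= MAX_BOARDS:
        raise ValueError("a single TDlink supports 1..16 boards")
    return LINK_GBPS / n_boards

# With a full chain each board still gets ~266 Mb/s, which is plenty
# when events are just a pulse height plus a timestamp.
print(f"{share_per_board(MAX_BOARDS) * 1000:.0f} Mb/s per board")
```

Since the FERS event payload is only pulse height and timestamp, even the fully shared link leaves ample headroom, which is the point the answer makes.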
09:30:11 And moving right along, the 9:30 talk is by Dr., and I'm going to mispronounce it, Takao Sakaguchi. Yes, exactly, yeah. 09:30:17 "SAMPA v5 and the sPHENIX TPC." Welcome to the workshop. 09:30:22 Okay, thank you so much for the invitation. Let me share my slides. 09:30:29 Can you see it? 09:30:30 Yes. 09:30:32 Okay. 09:30:33 Go to the full screen. 09:30:41 Nice. Okay. 09:30:42 Yeah. 09:30:44 Okay. So again, thank you for inviting me to this very nice workshop. I joined the last maybe one or two, but never presented here, so I'm very happy to present something here. 09:30:58 So I'm going to talk about the latest status of the SAMPA v5 in the sPHENIX TPC; actually, just the SAMPA v5 for the TPC itself. 09:31:23 Okay, so just to give you some idea of what the sPHENIX TPC is: I guess some detail was already presented, so I will just give you some key parameters that we have to keep in mind 09:31:30 in order to design the electronics. 09:31:33 This is the primary device for the tracking and momentum measurement in sPHENIX, and the active volume in the radial direction is 20 to 80 centimeters, with full azimuthal coverage and pseudorapidity less than 1.1, so it is 09:31:56 a very compact TPC, meaning that if you want to read many channels you have to really stuff a lot of ASICs, or whatever, in a small space. 09:32:06 So this is a kind of challenge. And the event rate is going to be from 15 to 100 kilohertz, although the DAQ can limit the event rate; 09:32:20 the collision rate goes up to 200 kilohertz, and recently there was some discussion about this. 09:32:28 Now, the goal is to achieve 100 to 250 micron position resolution in the full active volume. 09:32:35 So, hardware-wise, what we did is divide the radial direction into three regions, R1 to R3, and it is on the outer faces that we stuffed the electronics: 09:32:49 the ASICs are mounted on the two endcaps.
So this is showing the pad planes, and this is the outer part; it is very thin, 09:33:00 the thickness is only some centimeters. 09:33:05 So, this is the overview of the readout chain of the TPC. 09:33:12 Again, we have the TPC pad plane inputs, and we stuffed the electronics as close as possible; it actually plugs onto the pad plane. So here I show the pictures: this is the latest mock-up of one TPC sector. We have twelve of 09:33:36 these sectors on each side, so we have 24 for both sides, and we have stuffed six FEEs on the innermost, eight in the middle, and twelve on the outermost. 09:33:49 It's very dense, and we have to cool these electronics as well, so it's pretty tight. The signal from this electronics is already digitized and then sent to the so-called DAM. The DAM, hardware-wise, is a FELIX card 09:34:09 with PCIe; it basically sits in the PC servers. 09:34:14 And then the data is processed, to some degree, and sent to the data storage, tapes or something. 09:34:24 Roughly speaking, the data rate is around 110 megabits per minimum-bias event from the whole TPC. 09:34:33 So you should multiply it by the number of events, the collision rate, to get the whole data rate. 09:34:44 The other key number is the total number of FEEs, which is 624; again, twelve plus eight plus six, times 24, gives this number. 09:34:57 So, directly to the front-end electronics. 09:35:02 This is the design, 09:35:10 the block diagram of the electronics. Since the event rate, we expected, can be as high as 100 kilohertz at the time of the design, 09:35:16 we decided that we need a continuous readout mode. We just digitize all the data from the pad plane, let the FPGA process and control the data traffic, and then send it to the back-end electronics, and we decided to use eight SAMPA chips 09:35:38 per FEE.
09:35:40 In order to accommodate the pad plane, one FEE basically accommodates 256 channels, 256 inputs. 09:35:53 And I'm going to talk about it in a little more detail later, but we need to use the shaping time of 80 nanoseconds. This was once tested but was dropped as the development of the SAMPA progressed, 09:36:15 and right now the SAMPA v5 has these options back, as I'm going to discuss a little more. 09:36:23 Then the FPGA receives the data from the SAMPAs and sends it to the optical link. It also processes the clock and the control data sent from the DAM. 09:36:32 So the optical link is really the only communication link between the back end and this board. 09:36:44 Now, JTAG is implemented over the optical link, which I'm going to show on the next slides, and we have four pairs of optical fibers from the FEE to the DAM 09:37:06 per unit. Our ADC clock is either 5, 10, or 20 megahertz, and these numbers came from the fact that we base them on the RHIC beam-crossing clock. The power consumption right now is measured to be 15 watts per board. 09:37:19 These are the pictures; this is the latest, we call it pre-production, board, and it has been tested, and so far so good. Once we confirm the other functions, 09:37:31 okay, this is going to be the final version. 09:37:36 So, a little bit more about the FPGA and the back-end talk. 09:37:43 This is what I was talking about. We have the so-called QSFP modules, with the TX and RX, and we use the two TX for the data sending, and one RX for the system clock and the control data in.
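[Editor's note: the channel counts quoted above follow directly from the stated numbers (8 SAMPAs per FEE, 32 channels per SAMPA, 6 + 8 + 12 FEEs per sector, 24 sectors); a quick check:]

```python
# Channel-count arithmetic from the figures quoted in the talk.

CH_PER_SAMPA = 32            # channels per SAMPA chip
SAMPA_PER_FEE = 8            # chips per front-end board
FEE_PER_SECTOR = 6 + 8 + 12  # inner + middle + outer modules per sector
SECTORS = 24                 # 12 sectors per side, 2 sides

ch_per_fee = CH_PER_SAMPA * SAMPA_PER_FEE
total_fee = FEE_PER_SECTOR * SECTORS
total_ch = ch_per_fee * total_fee

print(ch_per_fee)  # 256 channels per FEE, as stated
print(total_fee)   # 624 FEEs, as stated
print(total_ch)    # 159744, i.e. ~160k pads read out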
09:38:05 Now, we have put in a so-called JTAG scan bridge; this is a TI part, a very convenient IC, and with this we can generate JTAG signals 09:38:22 on the FEE. So you can send the signal through the optical link and let this scan bridge control the FPGA. 09:38:33 It means that we can program the FPGA using the optical fibers. 09:38:39 We can also look at what's happening in the FPGA under the very harsh radiation environment. So basically, we can do the SEU mitigation, or scrubbing, of some of the sub-modules. 09:38:54 What I'm showing here is the JTAG path: this is the back-end part of the JTAG-over-fiber, and this is the DAM and the FEE boards. I think you're already familiar with it. 09:39:06 Okay. So finally, we come to the SAMPA chips. 09:39:11 I'm talking first about the time before the v5 development; 09:39:18 the basic overall scheme is the same. 09:39:22 The SAMPA is 32 channels of CSA plus shaper and an ADC, and the DSP; or maybe the DSP is also part of the 32-channel division. 09:39:39 And 09:39:39 for the CSA and the shaper, the options are a combination of the gains and the shaping times. For the v4, I took this from the datasheet: it's 30 millivolts per femtocoulomb and 20 millivolts per femtocoulomb for the 160 nanosecond 09:39:59 shaping, and 4 millivolts per femtocoulomb for 300 nanoseconds. 09:40:04 The 4 millivolt option is basically for the ALICE muon chambers, and the 30 and 20, this is for the ALICE TPC. It's pretty much specialized for ALICE, but still it's very useful for sPHENIX, so that's why we started 09:40:20 out developing the electronics around this SAMPA chip. 09:40:26 The power: 09:40:28 roughly speaking, it's one watt per chip. So if you have eight chips, that means eight watts already, and with the FPGA, roughly four more, so it adds up to the 15 watts of power consumption.
09:40:42 So, the DSP functions: these include the data buffering and packeting, adding the headers, etc. 09:40:49 There are three baseline correction and restoration schemes, and also the data reduction schemes, such as zero suppression, and also the cluster sum after the zero suppression, which basically sums up a couple of the samples. 09:41:09 It gives a smaller data size, but you lose the information of the time, the arrival time of the signal. Also, there is the so-called Huffman compression. 09:41:20 As far as I know, it has never been used; I'm not sure it has been tested. 09:41:24 But in our case, we are not planning to use it either. The operation modes include the triggered and the so-called continuous readout modes. 09:41:37 And there is also an option where you can read all the ADCs directly through the e-links; this is called direct ADC serialization. 09:41:45 The output of the chip has the 11 e-links, 09:41:52 and one e-link can afford up to 320 Mbps, meaning that you have an output of 3.2 Gbps per chip. 09:42:05 And what this means is that you can get the data at 10 megahertz sampling without zero suppression applied; this is possible, the data bandwidth-wise. 09:42:21 If you are interested in the operation modes in more detail, 09:42:25 these are the references, available publicly; if you just Google "SAMPA" you can get these. The latter is a nice paper, published in March, 09:42:43 and it is, I would say, a very nice reference if you would like to design electronics around the SAMPA. 09:42:51 Okay, so this is the SAMPA history. 09:42:55 As far as I understand, the so-called MPW1 run was the initial development to test the SAMPA components, and at that time there were the two or three shaping time options.
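[Editor's note: the "10 MHz without zero suppression" claim can be checked against the quoted e-link bandwidth. The split of 10 data e-links out of 11 is an assumption made here to match the quoted 3.2 Gbps; the talk does not state it explicitly.]

```python
# Raw (non-zero-suppressed) SAMPA output versus serial-link bandwidth.
# 32 channels x 10-bit ADC samples at 10 MS/s against the quoted 3.2 Gbps.

CHANNELS = 32
ADC_BITS = 10
SAMPLE_RATE_MHZ = 10
ELINK_MBPS = 320
DATA_ELINKS = 10   # assumption: 10 of the 11 e-links carry ADC data

raw_mbps = CHANNELS * ADC_BITS * SAMPLE_RATE_MHZ  # ADC payload, Mb/s
link_mbps = ELINK_MBPS * DATA_ELINKS              # serial capacity, Mb/s

print(raw_mbps, link_mbps)        # both 3200: the budgets match exactly
print(CHANNELS * ADC_BITS * 20)   # 6400 at 20 MS/s, so ZS is needed there
```

So at 10 MS/s the raw stream just fits, while 20 MS/s operation needs the zero suppression described above.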
09:43:10 The so-called MPW2 was the integrated SAMPA chip, and at that time the 80 nanosecond option was dropped. 09:43:19 And the third version was supposed to be the production version. There were some issues with the DSP in the second version, and that was fixed; also, some memory part was redesigned to mitigate the radiation issues, and it was submitted 09:43:37 for the testing. And then the issue of the pileup came after this, and there was another version of the SAMPA to shorten the decay time of the CSA by changing the feedback resistance; this is my understanding. 09:43:57 So, right now, the SAMPA v5 uses the same scheme as the v4. This shows pulse injection at a fixed high rate: 09:44:19 you keep injecting the pulse and it doesn't shift the baseline too much, so this is very good for high-rate measurement. 09:44:25 Now, the v5: 09:44:27 this is for the sPHENIX TPC, starting from April 09:44:36 2018, and our desire was to revive the 80 nanosecond peaking time, in place of the 300. So now we have the 80 and the 160. And there is also a modification of the ADC so that it has a better linearity. 09:44:54 Ah, okay. Now, the development of the SAMPA v5 itself: 09:44:59 it started, again, 09:45:02 in April 2018. 09:45:05 We have a close collaboration with the University of São Paulo, 09:45:12 and this shaping time was once tested, but in order to get it integrated into the SAMPA v5 the simulation study had to be revisited, so they started out from full simulations. That's why. 09:45:31 And also, as I mentioned, the ADC switching 09:45:38 was revised. The thing is, if you go to the 80 nanoseconds, you should have enough time samples to reproduce the pulse shape, meaning that we have to go to the 20 megahertz most of the time, instead of the 10 megahertz.
09:45:57 And since ALICE uses either 5 megahertz or 10 megahertz, 09:46:02 it was not a problem for them, 09:46:04 but for the 20 megahertz operation, the switching scheme for the ADC 09:46:10 needed to be improved. 09:46:13 There were some issues that make the noise higher, and this modification was implemented in the v5. 09:46:23 With these modifications, 09:46:43 we produced another multi-project wafer. This is the design, and it has the analog part and the ADC part, the old ones and the new ones with some modifications. 09:46:43 And there are a couple of the analog parts on the same die at this point. So we have a couple of chips 09:46:58 on the same die; they were produced, cut out, and tested at the University of São Paulo. That happened actually in October 2019, so it's two years ago; it's still not that old. 09:47:07 And the v5 production also started 09:47:12 almost half a year after this testing, so it was very rapid progress, but it was very successful, and the whole schedule actually went as we expected. Some test results from the MPW chips: 09:47:31 this part is the analog part. São Paulo made the test boards for these chips and got a nice shaping pulse at 80 nanoseconds. 09:47:44 This is shown here. 09:47:46 The measured linearity between the input charge and the peak amplitude is very nice. 09:47:53 It shows the very nice features 09:47:58 that are expected. The power consumption was 6 milliwatts per channel, meaning the total power consumption is almost the same as the v4, and the noise measured at the time was around 500 electrons, and 600 when it comes with the detector capacitance. 09:48:20 There is another test result for the ADC and the analog-plus-ADC combination. So again, with the improved switching, we expect the ENOB to be better. 09:48:34 And actually it is.
09:48:49 This is the ENOB as a function of the amplitude of the output. For the v4, it gives 8.2 at the maximum amplitude and 9.2 at the lowest, and you see the ENOB improved: at the highest amplitude it is 8.7 for the v5. 09:49:01 So this is as expected. 09:49:04 And with this ADC and analog circuit, the pulse shape was actually reproduced using the data from the ADC. So this is the pulse shape and peaking time; this is as expected. 09:49:25 You see the number is lower than 80 nanoseconds, and this is actually a feature, not a surprise: 09:49:28 if you want to have the faster rise time, 09:49:32 you know, it's, 09:49:36 okay. Then we went on to the production of the SAMPA v5 chips, after the successful component developments. Finally we had to integrate all the components into one, and multiply it by the 32 channels. 09:49:55 So it's a big die. This is the design, and just the CSA component is such a dense layout. They rechecked every single circuit and submitted it to the wafer production company, and the wafers were produced with the so-called MLM 09:50:18 mask set. 09:50:21 And we ran two runs: one is the engineering run, which produced 21 wafers, and the other is the production run, 25 wafers. 09:50:32 The thing is that we need 5000 chips, absolutely, 09:50:38 but, taking into account the yields, we decided to run 09:50:43 not only the engineering but also the production run, and that gives the 12k chips in the end. 09:50:50 Production-history-wise, we processed the first four wafers first, 09:50:57 and we checked whether they were okay. The rest, 17 wafers, were held at the company, and once it was confirmed they were okay, we asked the company to process those 17 wafers through to the packaging, and also to run the other 25 for the production. The costs I cannot write here 09:51:20 explicitly, but you can ask offline.
09:51:26 Now, this is a test result from the production chips. Again, it's a detailed test result from input pulses to the SAMPAs. 09:51:34 This is the 30 millivolts per femtocoulomb setting with the 80 nanosecond shaping; you see some very nice linearities. The other gain setting, the 20 millivolts per femtocoulomb, is also nice. And the peaking times: 09:51:50 with the more chips, you see around 68 to 90 nanoseconds. 09:51:59 So this is the noise, as a function of the detector capacitance. 09:52:04 We set the goal of the noise at less than a thousand electrons at 80 nanoseconds, and roughly speaking, it meets it, very marginally. 09:52:19 But we have to note that there is some possible noise from the test boards themselves. 09:52:26 We had to do the tests very aggressively and very rapidly, 09:52:35 so there may be some places to improve, but we didn't have time to do that. But we are happy to see these numbers already less than a thousand. 09:52:47 And we also got the results from the university, 09:53:01 from the first four wafers, and that also gave us more confidence that the noise is kind of good. By the way, this is the facility that we are eventually asking to test the 12k chips. 09:53:15 And this is the robot, 09:53:18 and it is completely automatic. If you're interested, you can look at the YouTube videos; it's very cool. 09:53:26 It takes maybe two seconds, or five seconds at most, per chip, including the placing and the replacing, and it does all the functions: it produces all the mean values, RMS, gains, and rise times, and also the DSP functions 09:53:46 through the JTAG. 09:53:48 And what we are going to do is both the 160 and the 80 nanosecond testing; it's the same testing items as for the TPC and the muon chambers. 09:54:00 So far, the 160 nanosecond testing was done for all the 12k chips, and so far the yield is around 70%. The 80 nanosecond testing is in progress.
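[Editor's note: the yield numbers above against the stated need of 5000 chips work out as follows:]

```python
# Yield arithmetic for the production batch quoted in the talk:
# ~12k packaged chips, ~70% passing the 160 ns test, 5000 chips needed.

produced = 12_000
yield_frac = 0.70
needed = 5_000

passing = int(produced * yield_frac)
print(passing)            # 8400 good chips expected
print(passing >= needed)  # True: a comfortable margin over the 5000
```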
09:54:13 It takes a little bit more time because of the COVID situation, 09:54:18 but we are getting the chips very soon. 09:54:22 So I guess this is the third-to-last slide. 09:54:28 This is actually a brand-new result from our lab. 09:54:32 These plots were made by our electronics instrumentation engineer, and what is shown here in the first one is the ADC values as a function of the time slices. 09:54:49 It's 50 nanosecond ticks, so it's 20 megahertz operation, 09:55:07 with the 80 nanosecond shaping, the 30 millivolts per femtocoulomb setting, and no detector capacitance at the input. 09:55:00 This is, by the way, an overlay of the 256 channel outputs. 09:55:05 And if you plot the pedestal against the channels in the FEE, you see very stable ADC pedestal positions, although the statistics are lousy, and if you measure the RMS, 09:55:25 it's about 1.2 ADU, 09:55:27 and this is corresponding to several hundred electrons. So it meets the requirements. 09:55:35 So it's very nice. 09:55:39 Um, the last thing I'd like to show is the radiation hardness test results. 09:55:47 Indeed, we didn't do the component-by-component irradiation test. Just to give you an idea, the radiation hardness required for the electronics is a hundred kilorads. 09:56:01 Our estimate is 25 kilorads for the five years at most, but the review committee requested us to test up to the hundred kilorads. 09:56:13 And we went to the cobalt gamma source at Brookhaven, an installation that gives you about a kilorad per hour if you place the board like 10 or 20 centimeters from the source. 09:56:28 And this is the result for most of the parts of the SAMPA v5 board: 09:56:35 most of the parts passed the test, including the SAMPAs, and the failed parts were replaced with alternate parts later. 09:56:44 For the SAMPAs,
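[Editor's note: a rough sketch of how the quoted ~1.2 ADU pedestal RMS translates to "several hundred electrons". The 10-bit ADC depth and 2.2 V range are assumptions taken from the SAMPA discussion elsewhere in the talk, not stated on this slide.]

```python
# Rough conversion of the quoted 1.2 ADU pedestal RMS into electrons.
# Assumptions: 10-bit ADC over a 2.2 V range, 30 mV/fC gain setting.

E_CHARGE_C = 1.602e-19  # electron charge, coulombs
adc_range_v = 2.2       # assumed ADC full-scale range
adc_bits = 10           # assumed ADC depth
gain_v_per_fc = 0.030   # 30 mV/fC, the setting used in the measurement
rms_adu = 1.2           # measured pedestal RMS in ADC counts

lsb_v = adc_range_v / 2**adc_bits   # volts per ADC count
lsb_fc = lsb_v / gain_v_per_fc      # input charge per ADC count, fC
rms_electrons = rms_adu * lsb_fc * 1e-15 / E_CHARGE_C

# A few hundred electrons, consistent with the statement in the talk.
print(round(rms_electrons))
```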
09:56:46 they were turned on and taking data normally after this irradiation. 09:56:52 We haven't tested the linearity or anything, but the fact that the operation already works gives us the confidence that the SAMPAs are also good up to a hundred kilorads. 09:57:05 We also did a magnetic field test, at up to 1.5 Tesla, and we saw no effect on the components, including the optical transceivers; that's kind of consistent with our expectations. 09:57:21 Okay. 09:57:23 The summary: 09:57:31 to date, and the future. So we think that the SAMPA v5 development was successful, and we have the 12k chips already, and so far the performance meets our expectations. More detailed tests with the software are coming, including the linearity 09:57:43 check; it's a basic thing, but also the saturation characteristics. If you have too-large signals, maybe it would shift the baseline; this is something that we have to check. Also spark protection and some other stress tests that we are going to perform, 09:58:00 and implementing the SEU mitigation for the FEE. This is mainly for the FPGA; from our point of view, the SEU for the SAMPA is not very critical for the operations. 09:58:14 It should be less frequent than in the FPGA, it doesn't stop the data sending, and we can power-reset the SAMPAs if this happens, through the FPGA, so we can recover in a very short time. 09:58:30 Now, for the future of the SAMPA: the wafer company actually decided to finish producing wafers from the MLM masks, and our last order for the SAMPAs actually went out at the MLM level. Some interested parties, as far as I understand, negotiated 09:58:49 with the company at the time on how to proceed. And after that, the additional SAMPA chips can still be made, but you need to make a new, so-called full mask set. 09:59:02 And the only downside of this is that you have to pay a little bit more, compared to the MLM.
09:59:11 So this means that the SAMPA chips are still available. 09:59:15 It's a very interesting and nice ASIC. 09:59:19 So that's all I can say. And thank you for listening. 09:59:25 Thank you very much. That was an excellent talk. Let me 09:59:31 turn my video on real quick; I had a question. I didn't see any other hands up; I see Chris Crawford's hand is still up, but maybe that's from the last session. 09:59:41 So you answered my question in your summary, the third bullet, about the SEU mitigation. When you were doing the radiation tests, the dosing, were you running background scrubbing, or did you see any SEUs from the FPGA? Like you said, you're 10:00:01 not concerned too much with the SAMPA. 10:00:04 But that Artix, it's an Artix-7, right? Yes. 10:00:09 So, you know, basically, with the cobalt gamma source, 10:00:15 as far as I can tell, there was no SEU. The rate is pretty low. Yeah, right. It's too low. Yeah. And I guess we get to do the proton irradiation test at some point, but we have estimated how much SEU we would have. 10:00:38 How much SEU we would have: at 200 kilohertz, our estimate, roughly, is like once in a thousand seconds. 10:00:46 But, you know, we have the 624 cards, 10:00:51 so, you know, eventually, as a whole TPC, you have more SEUs. Now, the mitigation plan for the FPGA is, as you imagine, the scrubbing of the FPGA memories. 10:01:05 That's why we implemented the JTAG over the optical fibers for the production. 10:01:13 There was a plan to duplicate the whole FPGA logic, or to triplicate the whole logic in the core, you know, just the TMR, 10:01:27 as well as implementing some monitoring logic. 10:01:32 But we found that we have to implement other features in this FEE FPGA; that is, some additional processing, which is to be used to correct some of the distortion of the TPC field.
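[Editor's note: scaling the per-board SEU estimate quoted in this answer (roughly once per thousand seconds at 200 kHz) to the full complement of 624 FEEs:]

```python
# Whole-TPC SEU rate from the per-board estimate quoted in the answer.

per_board_hz = 1.0 / 1000.0  # ~one upset per 1000 s per FEE (estimate)
n_boards = 624               # total FEE cards in the TPC

tpc_hz = per_board_hz * n_boards
print(f"{tpc_hz:.2f} upsets/s")         # ~0.62 upsets/s detector-wide
print(f"one every {1 / tpc_hz:.1f} s")  # i.e. one roughly every 1.6 s
```

This is why the scrubbing-over-JTAG scheme matters at the detector level even though a single board upsets only rarely.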
10:01:47 So we found the easiest solution is scrubbing the FPGA through the JTAG. Thank you. 10:01:57 I see a few hands up. 10:02:01 Yes, just two questions, two related questions. Do you think it's possible to reduce the shaping time even further, in another iteration? 10:02:12 Well, um, I can tell you that during the development phase São Paulo also tried to simulate 40 to 50 nanoseconds, instead of the 80 nanoseconds; 10:02:31 for me, 50, or a little bit higher than 50. 10:02:35 And it seems it worked. 10:02:36 So, absolutely, 50 nanoseconds can go. But below 50 nanoseconds, 10:02:42 the thing is, 10:02:47 the CSA, I mean, the CSA has an op-amp in it, and the op-amp needs a phase compensation circuit, and that's what has to be revised. 10:02:59 Basically, the rise time is very fast, meaning that, you know, you have to have a higher transconductance in the FET, the transistor, in it. 10:03:12 So you have to redesign these parts quite a bit; it's a very major change. So we decided not to go faster than the 80 nanoseconds, if at all, and at that time the simulation told us that the 80 nanoseconds 10:03:30 is actually, effectively, 60 nanoseconds. 10:03:33 It's enough for our use. 10:03:36 And if one wanted to push this to 50, what do you think, just a rough estimate: how long would it take, from the idea, to have the chip in your hands? 10:03:51 So, if you follow our timeline: 10:03:55 we started the simulation in March 2018, and we already got the design completed by summer. And then we sent it out to the wafer company, which made the component productions, and we tested those, and the v5 production started one and 10:04:25 a half years later, at most. 10:04:25 But the problem is that the multi-project wafers, 10:04:31 you know, to test the components, they don't run too much anymore.
10:04:36 It doesn't run too often; it didn't run so often at that time. It's, like, three times a year, and you have to get on that run to get your design tested. 10:04:53 Yeah, and I'm not sure which companies do the MPW, the multi-project wafer runs, now; that has to be consulted with your company, yes. 10:04:57 Okay, thank you. 10:04:59 I've got some questions queuing up, but I think it's worth extending a little bit. So, Sergey, I believe you're in line, go ahead. 10:05:07 Yes. I know that we are expecting some boards for evaluation, and I'm wondering when we can get them. 10:05:19 Sorry, could you repeat? 10:05:27 You're supposed to send one board for evaluation to Jefferson Lab. Ah, yes, yes. What's the schedule for that? 10:05:33 What's the schedule? 10:05:34 Well, when will that happen? 10:05:37 Oh, the... I mean, you asked when it happened? Yeah. 10:05:48 I don't remember when we sent it. 10:05:54 Last year, and I was told that it will come soon. Right, so do you know something about it? Sorry for hijacking, but I think Sergey is asking when we will get the FEE prototype; send us one, even just for mechanical reasons, so we can map it out, because 10:06:22 we were going to get some, a couple of copies, of the FEE card. Yes, yes. Actually, yes. Right now we are using the pre-production prototypes for the testing; the production 10:06:31 will start, I think, in one or two months, and then we get the printing of the boards. And those pre-production boards can go to your place, maybe a couple. 10:06:52 Okay. 10:06:53 Sorry. 10:06:54 Yeah. 10:07:05 Yes. Can you adjust the gain on the SAMPA? Yes. 10:07:09 And so what's the maximum you can go? 10:07:13 Oh, the... okay, so we have 20 and 30 millivolts per femtocoulomb as the gain settings, and the dynamic range is about two volts, 2.2 volts. 10:07:27 So, if you take the 20 millivolts per femtocoulomb, you can accept up to about a hundred femtocoulombs; with the 30 millivolts per femtocoulomb it's 67 or 68.
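The full-scale charge figures quoted in the answer follow directly from the gain settings and the roughly 2 V output range; a quick numerical check of the speaker's arithmetic (treating 2.0 V as the usable range, as the answer does):

```python
# Maximum input charge = dynamic range / gain, for the two quoted gain settings.
dynamic_range_v = 2.0          # usable output range quoted as ~2 V (of 2.2 V)

def max_charge_fc(gain_mv_per_fc):
    """Full-scale input charge in femtocoulombs for a given gain in mV/fC."""
    return dynamic_range_v * 1000.0 / gain_mv_per_fc

q_low_gain = max_charge_fc(20.0)   # 100 fC at the 20 mV/fC setting
q_high_gain = max_charge_fc(30.0)  # ~67 fC at the 30 mV/fC setting
```

The ~67 fC result matches the "67 or 68" figure given for the higher-gain setting.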
10:07:45 Yeah, and can you go up to, like, 150 femtocoulombs, or even higher? 10:07:48 Unfortunately, in this version we dropped the 300-nanosecond shaping option. 10:07:54 And that automatically dropped the 4-millivolt-per-femtocoulomb option too. 10:08:00 So you cannot go higher, unfortunately. Okay, thank you. 10:08:06 Thank you, William. And Fernando, you get the last question, and then we'll move on. 10:08:10 Yes, I've got a question on your slide 13. You don't have to go to it, but you mentioned that the yield is 70% at 160 nanoseconds peaking time. 10:08:25 What's the main reason for such a low yield? And what do you anticipate it to be at 80 nanoseconds? Oh, so, right now... 10:08:41 So, as far as I remember, for the first several hundred chips, the ones that passed at 160 almost all passed the 80-nanosecond test too, so roughly the same yield. 10:08:56 Now, in the ALICE case, Lund University also does the testing for ALICE, and they get 10:09:05 an 80% yield there. And of course this 80% effectively means that you have certain thresholds for accepting the chips, including the position of the pedestals and some voltages of the chips. 10:09:19 So there is an acceptance range where they cut, and eventually they come to the 80%. 10:09:26 You know, it depends on the purpose, how you use the chips; maybe you can widen the acceptance range and come to 85 or 90. But we are sitting on the safe side. 10:09:39 So we are still fine with a 70% yield. 10:09:49 Maybe some chips can be rescued; I don't know, but this is the thinking. 10:09:52 Thank you.
Yeah, I was just wondering if it was due to a manufacturing issue, primarily from the TSMC, or... Ah, okay. So one thing I can tell, and this is learned from some other groups: die-wise, whether a die is functional or not, it's almost 90% 10:10:09 of the dies; but if you package them and, you know, test the actual pedestals, taking only the good chips reduces it another 10%. So 90% of the 90%, so to speak. 10:10:26 Okay, great. Well, thank you. 10:10:31 Can we move on to the next talk? Thank you very much, that was a very interesting talk. 10:10:39 If you have any additional questions, you know, feel free to email, and I can try to relay the questions. 10:10:48 Okay, so the next talk is Dr. Yasser Corrales Morales, about the micro vertex tracker in sPHENIX. 10:11:00 Okay, let me share 10:11:06 that out. 10:11:08 I'm trying to share, but my Zoom is kind of... let me see. Maybe I can... yeah, can you stop sharing? 10:11:21 Yeah. 10:11:21 Can you see this? No, we still see the talk. 10:11:37 We still see the talk. 10:11:40 I'll get out of Zoom for a moment and get back in, then. 10:11:51 Okay, so I logged out and logged back in. 10:11:58 I don't know how to stop the screen share. 10:12:04 The owner of the Zoom session can override it. Yeah, there you go. 10:12:12 Thank you. 10:12:14 Thank you. Yeah, sure. 10:12:15 Okay. Do you see my slides now? Yes. 10:12:19 Okay, thank you. First, thank you so much for the invitation. 10:12:23 I will first talk about the streaming readout for the micro vertex detector of the sPHENIX experiment, and then highlight in the last slides the related R&D at Los Alamos for the EIC detector.
10:12:39 So, firstly, let me introduce the sPHENIX experiment. This is a new detector at RHIC; the idea behind it is to probe the inner workings of the quark-gluon plasma over a wide range of momentum, with a scope complementary 10:13:07 to what is done at the LHC. The full installation of the detector will be finished by the end of 2022, and it should start taking data at the beginning of 2023. 10:13:18 The micro vertex detector will provide high-resolution vertexing and is part of the tracking system of the sPHENIX experiment; 10:13:30 it is going to be the innermost detector. In the tracking system you can find the MVTX, the micro vertex detector; the silicon intermediate tracker; and the time projection chamber. This tracking system provides high-precision momentum and displaced-vertex measurements, and it will operate 10:13:44 in a magnetic field of 1.4 tesla. 10:13:47 So, moving to the MVTX: here in the center I show you a section view of the MVTX detector and its subsystems. The detector consists of three layers of silicon detectors, based on the ALPIDE sensor, the MAPS sensor developed for the upgrade 10:14:08 of the ALICE Inner Tracking System. The layers are segmented in staves; in total we have 48 staves for the full MVTX. Each stave is a single row of nine chips, thinned to 50 micrometers, glued to a flexible printed circuit for power, signal and clock distribution, and to the cold-plate 10:14:29 mechanical support. 10:14:33 Here in the bottom I show you a picture of an MVTX stave; they are almost identical to the ALICE ITS inner-barrel staves. The difference is that we use a longer power extension to fit it inside the sPHENIX barrel. 10:14:52 As additional features, I mention that the first layer is close to the interaction point, at 2.5 centimeters, and that the detector covers a pseudorapidity range of less than about 1.1, 10:15:10 with all the layer radii within the setup at less than a few centimeters. So.
10:15:16 So, let's move forward to talk about the ALPIDE sensor, which is the central part of the MVTX. 10:15:26 Here I show you a schematic of the cross-section of the ALPIDE chip. The ALPIDE is produced in the TowerJazz 180-nanometer CMOS imaging process. 10:15:44 This process allows implanting the n-well collection electrodes on top of a deep p-well and a high-resistivity epitaxial layer on the p-type substrate. 10:16:01 This allows the possibility to implement CMOS logic inside the pixel without deteriorating the charge-collection efficiency. 10:16:12 You know, the TowerJazz process also allows the application of a negative bias to the substrate, which increases the depletion zone; in this way it improves both the charge collection and the signal-to-noise ratio. 10:16:31 A single chip measures 15 by 30 millimeters square, and it contains a matrix of pixels of 29 by 27 micrometers square, in 512 rows and 1,024 columns. 10:16:49 Another characteristic of this chip is a very low power consumption; we are talking about 40 milliwatts per square centimeter. 10:17:01 And it has an integration time of less than 20 microseconds, and a maximum readout speed of over 1.2 gigabits per second. 10:17:11 So, 10:17:14 here I show you a schematic representation of the pixel circuitry implemented in the ALPIDE sensor. Each pixel cell consists of a sensing diode, an amplifier and shaping stage, 10:17:37 a discriminator, and a digital section. Every pixel also includes an injection capacitor for test and calibration purposes. 10:17:44 The digital section includes three storage registers, which we call the multi-event buffer, and the front end and the discriminator operate continuously. 10:18:10 The output of the front end has a peaking time of around two microseconds.
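The chip dimensions quoted above are consistent with the pixel pitch times the matrix size; a quick check using nominal ALPIDE-style numbers (the 29.24 µm by 26.88 µm decimals are my assumption; the talk only quotes 29 by 27 µm):

```python
# ALPIDE-style matrix geometry check: pitch x matrix size ~ quoted chip area.
pitch_col_um = 29.24   # pixel pitch along a row (column direction), assumed
pitch_row_um = 26.88   # pixel pitch along a column (row direction), assumed
n_cols = 1024
n_rows = 512

matrix_width_mm = n_cols * pitch_col_um / 1000.0   # ~29.9 mm of the 30 mm chip
matrix_height_mm = n_rows * pitch_row_um / 1000.0  # ~13.8 mm of the 15 mm chip
n_pixels = n_cols * n_rows                         # ~half a megapixel per chip
```

The remaining millimeter or so of chip height is the periphery (readout and control logic) rather than sensitive matrix.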
While the discriminated pulse has a typical duration of five microseconds, the front end thus acts as a delay, which needs to be set up according to the 10:18:14 trigger latency; this allows latching in case of a trigger arrival. I will talk more about that in a moment. 10:18:21 Finally, the latching of the discriminated hits into the storage registers is controlled by a global strobe signal. 10:18:30 Each strobe 10:18:40 latches the hits into one of the three registers I mentioned. So, let me talk a little bit about the matrix readout. Here I show you a 3D schematic of the charge-collection picture for a 2-by-2 pixel volume; 10:19:08 a very nice picture. 10:19:01 For the matrix readout, the pixels are organized in double columns, each one having 1,024 pixels, as is shown here in the right-hand 10:19:07 picture. The center part of each double column is occupied by the priority encoder circuit, which propagates the addresses of the hit pixels to the periphery. 10:19:19 In this way there is no clocked readout of the whole matrix: pixels without hits do not cause any activity in the readout. 10:19:33 Here I show you the full MVTX readout chain, which is separated into three locations according to the radiation level. 10:19:42 The MVTX front end is divided into modular readout units, shown here in the picture. 10:19:52 Each readout unit controls one entire stave, including the power to the sensors through a custom power board. 10:20:20 The stave is connected to the readout unit by a twinax copper cable, which carries nine data links, one clock, and one control signal. Each readout unit collects the data and sends it over optical links, using the GBT protocol, to the data aggregation 10:20:21 modules hosted in the acquisition servers; for that we use the FELIX card developed by the ATLAS collaboration.
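The zero-suppressed, address-based readout described above can be illustrated with a toy model: scan a double column's latched hit flags and emit only the addresses of hit pixels, in priority (address) order, resetting each flag as it is read. This is a sketch of the idea, not the actual ALPIDE priority-encoder logic:

```python
def priority_encode_readout(hit_flags):
    """Toy model of a priority-encoder readout of one double column.

    hit_flags: list of booleans, one per pixel address (True = hit latched).
    Returns the hit addresses in priority (address) order, clearing each
    flag as it is read out. Pixels without hits cause no readout activity
    at all: this is the zero suppression.
    """
    addresses = []
    for addr, hit in enumerate(hit_flags):
        if hit:
            addresses.append(addr)
            hit_flags[addr] = False   # reset once the address is propagated
    return addresses

# A mostly-empty column: only the three hit pixels produce output words.
column = [False] * 1024
for a in (3, 17, 900):
    column[a] = True
hits = priority_encode_readout(column)
```

The payoff is that readout time scales with the number of hits, not with the matrix size, which is what makes the 25 ns-per-hit accounting later in the talk meaningful.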
10:20:34 So, here, 10:20:38 I show you and present a bit more about the readout unit, which is the same as the one used for the ALICE ITS readout. 10:20:56 It will operate in a mild radiation environment, and it hosts a Xilinx Kintex UltraScale FPGA, shown here, which is the main programmable device; it operates the ALPIDE sensors and manages the data stream. The readout unit also includes a Microsemi 10:21:09 flash-based FPGA, which is used as a scrubber for the configuration of the Xilinx FPGA. 10:21:25 It carries GBT optical fiber modules, two transmitters and one transceiver, which we operate at a link data rate of 2.2 gigabits per second. Additionally, we have on board the GBT-SCA chip for monitoring and control of the readout 10:21:52 unit: board temperatures 10:21:54 and currents. 10:21:55 A mezzanine connector provides the control interface with the power board over a CAN bus, so power control is available even if the GBT link is not available. 10:22:12 The main function of the readout unit, as I just mentioned, is to control the sensors, supply them with clock and trigger, and then receive the data from the sensors, format and optimize the data packets, and transmit them over the bidirectional optical links to the FELIX 10:22:50 card. The FELIX FPGA is a Kintex UltraScale, 10:23:03 and the card has a custom PCI-Express form factor, hosted in the server, as shown in the diagram. FELIX acts as the interface between the readout units and the trigger and timing control system. 10:23:20 This card propagates the TTC information to the readout units over one of the optical links, and the data received from the readout units is formatted and forwarded via DMA to the memory of the server. 10:23:37 The FELIX card also serves as the interface between the detector control system and the readout units, 10:23:48 as a multi-channel link.
10:23:51 Lastly, the MVTX detector will be integrated into the sPHENIX triggering system. To read out the data flow from the MVTX detector, all 48 readout units and staves will be split over six independent FELIX cards, hosted by six DAQ servers, which will handle 10:24:11 the buffering and data compression. 10:24:27 We have developed a prototype of the FELIX plugin to interface with the sPHENIX DAQ system, and also a basic decoder for monitoring of the ALPIDE data. 10:24:35 However, the final cards are still in production. 10:24:42 So, how will the MVTX be integrated inside the sPHENIX timing and trigger system? For that, the MVTX detector will receive the trigger and timing information from the central granule timing module, what we call the GTM. 10:25:02 This information is currently delivered by the GTM in a continuous broadcast of 120 bits, with 8b/10b encoding, at a rate of six times the RHIC clock. However, due to the readout unit architecture, 10:25:20 we need to recover the clock from this stream, meaning a circuit that recovers the clock in the MVTX readout units, and synchronize the whole MVTX DAQ chain to a unique clock. 10:25:43 So, what we propose as a possible solution is to send the data to the MVTX at a higher frequency, which means we send the same information but at four times the RHIC clock. 10:25:54 And then we add some commas into the encoded stream to keep it aligned in time. 10:26:01 In this way, all the information from the GTM is received well inside one clock period. 10:26:08 The firmware for the GTM port has been developed, and in the simulation I show you here in the bottom, you can see the result: how we translate the input from a standard GTM data format to what is sent to 10:26:31 the MVTX.
The main negative impact is that the trigger latency is increased by about 240 nanoseconds, but this is not significant in principle, compared with the few microseconds of the nominal trigger latency and also with the microseconds of the integration 10:26:47 time, which I will discuss a little bit later. 10:26:54 So, to present how the MVTX detector can operate: it can operate in two modes. 10:27:08 The first is the triggered mode, where the pixels are latched during a strobe window, followed by the readout; and this is based on the standard trigger. 10:27:14 Here in this diagram I show you a representation of this mechanism: you have different events, and the particles corresponding to the triggered event enter within the integration time. 10:27:33 Then, when you have the trigger from the trigger system, the trigger activates the strobe; the strobe produces the latching of the pixels, and then the readout of the matrix to the system. 10:27:51 In the case of the sPHENIX GTM data, we expect a latency for the trigger of about six microseconds, from four to six machine cycles. So this needs to be taken into consideration in the day-to-day operation: the front end has to be tuned to increase 10:28:12 the integration time to larger than this value. 10:28:17 The second mode is the continuous mode, the trigger-less mode. In this case we use the same strobe mechanism; we issue internal strobes with a gap of about 100 nanoseconds 10:28:37 to initiate the readout. And here in the bottom you also see a schematic representation: here we have the strobe distribution, and events arriving in continuous mode. 10:28:47 And it shows you that we send periodic strobes to activate the latching of the pixels. 10:28:57 Here we can, for example, use a strobe with a duration of about 20 microseconds, with the mentioned gap between strobe pairs. 10:29:13 So, in sPHENIX we first proposed to run the MVTX detector in the triggered mode; for that we need
10:29:25 to find the optimal operation parameters, in particular to tune the integration time to be larger than the sPHENIX trigger latency. 10:29:41 The study with the ALPIDE default settings gives you around 10:29:47 an integration time of around five microseconds. Here in the bottom plots I show you the normalized pixel response to injected pulses as a function of the delay of the strobe: 10:29:59 here on the left, for the standard parameters, we notice that this five-microsecond integration time is inadequate, while some optimal parameters were found, shown in the plot on the right, using a longer strobe-delay scan in the lab. 10:30:23 And here we confirmed that we can set these parameters to extend the integration time enough for the trigger latency of sPHENIX. 10:30:41 However, we can see a caveat in this tuning: for that we need to decrease the threshold, which increases 10:30:49 the duration of the hits 10:30:51 and also the noise. 10:30:54 So, given the required re-tuning of the analog shaping time for this sPHENIX trigger-latency choice, 10:31:03 and given that the tuned settings are intended for low-multiplicity events, we decided that we should put the effort into continuous readout. 10:31:15 In addition, running all the tracking systems of sPHENIX, the MVTX, the INTT and the TPC, in continuous readout mode, we could record about 100 times higher statistics for minimum-bias p+p collisions, which include heavy-flavor events in the full 10:31:34 kinematic range. So, one of the important questions about running in continuous readout is how the dead time would increase in these conditions.
10:31:46 For that, we used some Monte Carlo simulations to estimate the probability of a given pileup multiplicity for a given ALPIDE pixel, compared to the readout time. The factors that contribute to the occupancy during the readout are the following. 10:32:08 First, the collision multiplicity on the pixels: using the simulation, here I show you the charged-particle multiplicity per chip that we use 10:32:37 in this kind of simulation. Then we have the number of pileup collisions, which is set to follow a Poisson distribution. Then the number of noise hits, which we 10:32:41 set according to the measured fake-hit rate, the value found in the characterization of the ALPIDE pixels during the R&D; 10:32:53 this is also distributed following a Poisson distribution. 10:32:58 And we also consider duplicated hits during the integration time of the front end, from hits falling within the integration window of the next strobe window, 10:33:12 in the simulation. 10:33:14 So, here I show you the results for p+p: 10:33:24 the simulation for the p+p case, with the pileup multiplicity per strobe, looking at the chip with the highest occupancy, the chip closest to the beam. These plots show you the number of per-chip pixel hits 10:33:39 and, on the right, the probability 10:33:45 of this multiplicity, for the three layers of the MVTX. 10:34:10 We already commented that we use a fake-hit rate of ten to the minus six for the noise. 10:34:18 Considering a readout time of around 25 nanoseconds per hit, which is more or less the readout time for a hit, I mean, the time that the chip needs to send out one hit's information to the readout unit, we can 10:34:41 see that the continuous mode is feasible, also for ten-microsecond strobes.
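A stripped-down version of such a Monte Carlo can be sketched in a few lines: draw a Poisson number of pileup collisions and noise hits per strobe, and count how often the hits exceed what the chip can ship out within one strobe at roughly 25 ns per hit. All the rates and multiplicities below are illustrative placeholders, not the actual sPHENIX inputs:

```python
import math
import random

random.seed(42)

def poisson(mu):
    """Draw from a Poisson distribution (Knuth's method, fine for small mu)."""
    l, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def busy_fraction(mu_pileup, hits_per_collision, mu_noise,
                  strobe_ns=10000.0, t_hit_ns=25.0, n_trials=20000):
    """Toy MC: fraction of strobes whose hits cannot be read out within one
    strobe period, given Poisson pileup and Poisson noise hits."""
    max_hits = strobe_ns / t_hit_ns          # hits that fit in one strobe
    busy = 0
    for _ in range(n_trials):
        n_hits = poisson(mu_pileup) * hits_per_collision + poisson(mu_noise)
        if n_hits > max_hits:
            busy += 1
    return busy / n_trials

# Illustrative: 10 pileup collisions per 10 us strobe, 30 hits each, 1 noise hit.
frac = busy_fraction(mu_pileup=10.0, hits_per_collision=30, mu_noise=1.0)
```

With these made-up means the busy fraction comes out at the ten-percent level; the real study feeds in the measured per-chip multiplicity distribution, the characterized fake-hit rate, and the duplicate-hit mechanism described above.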
10:35:00 In this worst case, the probability that the readout of a strobe is not finished five microseconds later is ten to the minus five. 10:35:09 And this is considering only two buffers, allowing for the duplication on the pixels. So 10:35:20 the loss probability is even smaller if we take into account that we actually have three buffers. 10:35:26 So, here are the same simulation results, but for the Au+Au condition. 10:35:35 In this case, reducing the strobe window to two microseconds is okay, 10:35:42 given the higher occupancy from noise and the higher rate of pileup. 10:35:48 For example, if we go to five microseconds of strobe, we have around a 1% chance that the third buffer is needed to avoid a loss. However, it is good to note that all this discussion is driven by the underlying distributions, which 10:36:04 need to be well understood. 10:36:08 And then we need to include also a detailed simulation of the data flow and of the busy logic to get to a final decision; however, it seems that the chosen microsecond-scale strobes should work fine. 10:36:27 However, as I mentioned, this needs to be followed by more study with the simulation. 10:36:34 So now, 10:36:50 let me move over to the second part of this presentation: 10:36:43 how aspects of the streaming readout developed for the sPHENIX MVTX detector can be used in the EIC detector. In this context, since the beginning, Los Alamos has shown interest in the study of heavy flavor in 10:37:08 e+A conditions, for example to understand the nuclear medium effects on heavy-flavor production, the modification of the initial nuclear PDFs, and the final-state hadronization process. 10:37:14 So, taking into consideration the requirements for the silicon vertex tracking detector at the EIC, such as a low material budget, fine granularity, and also fast timing and streaming readout capability, the MAPS technology, similar to the one being 10:37:32 used in sPHENIX, sounds like a good option to be used in a silicon detector at the EIC.
10:37:40 So, for the study of heavy flavor at forward rapidity, LANL has proposed a forward silicon tracker (FST) detector inside the EIC detector, as is shown in the figure 10:37:55 here in the left bottom. 10:37:58 The FST proposal consists of three planes of MAPS pixel detectors and two further planes of silicon detectors with timing capability. 10:38:19 One option considered for these layers of the detector is based on AC-LGAD sensors, 10:38:22 for which 10:38:22 the R&D is still ongoing; 10:38:29 right now the design of the EIC detector is not final, and we are currently exploring different options. 10:38:40 However, we are starting to introduce some of these 10:38:47 features in the Fun4All simulation framework. The study has been focused on the tracking and spatial-resolution performance, and a more realistic, complete description will be implemented in the simulation once we settle on the current design options. 10:39:13 And we should also get some output from the readout studies on how we can implement streaming readout in this FST detector. 10:39:15 So this leads me to the summary. The essentials are: firstly, the MVTX plans to operate in continuous readout mode, and our Monte Carlo simulations showed that continuous readout of the MVTX detector is feasible. 10:39:35 This continuous readout avoids the time bias introduced by standard trigger techniques. In addition, MAPS silicon is recognized as the leading technology for the EIC vertexing application, and the LANL FST proposal has, as a key input, 10:39:50 the heavy-flavor physics there. Thank you for your attention. 10:39:58 Thank you, Yasser, that was a very good talk, very complete. I appreciate your time. I don't see any hands up, but that doesn't mean there are no questions.
10:40:09 And we're chewing into the coffee break, but not too bad; if there are any questions, please unmute, and try to make it a discussion. 10:40:23 If not, I'm going to suggest... see, it's 10:40. 10:50:23 Welcome back. Hello, how are you? 10:50:27 Hi, I'm good. 10:50:29 Ready to present? Off you go. 10:50:37 Okay, let me share the screen. 10:50:37 Okay. So I hope you can see it. 10:50:42 Yes. 10:50:44 Hi, everybody. 10:50:46 So I'll be presenting 10:51:08 a bit about our company, what we do, 10:51:13 and how we're approaching the goals of developing those two ASICs; and then I'll go to the details of the ASIC1, which is a 12-bit, 32-channel, 10:51:28 0.5-gigasample-per-second 10:51:31 ADC, 10:51:32 and the ASIC2, which includes an event-building back end with a digitizer in front. 10:51:40 Okay. Here's our headquarters; our company is headquartered in the Los Angeles area, in Culver City, and we were incorporated in 2006. 10:52:04 And the main focus of the company is to provide IC and ASIC design and development services, and also turnkey solutions. 10:52:09 So the design services include circuit design, simulation, physical design, chip assembly, and integration of the blocks; and the turnkey solutions, besides the design, include also the chip fabrication logistics, package development, 10:52:32 packaging, PCBs which are used for testing and evaluation of our chips, and of course testing and characterization, 10:52:44 where we use our lab for that, and delivery of chips. 10:52:52 So, yeah, as I already mentioned, those chips, those ASICs, were developed for detector signal processing. One of them is an array of independent ADCs; the other is the array of event-building digital 10:53:17 processing blocks, which receive the signals from an array of ADCs. 10:53:26 Here are diagrams showing where we expect them in the system.
10:53:32 So the first ASIC is, as I mentioned, just an ADC, but it has 32 channels, all independent, so it can receive signals from detectors, digitize them with 12-bit resolution, and feed an FPGA, or wherever 10:53:55 the data has to be sent. 10:53:56 But the data stream is huge, as you can imagine. 10:54:02 So, in the second ASIC, what we did is we built in the event detection and building capability, which follows the digitizing; in this case the digitizer has a lower digitizing speed. 10:54:21 And then the data is compressed, because it is only transmitting the event-related data rather than all the samples. 10:54:35 The motivation: as we understand it, the detector channel count is increasing 10:54:43 always, and therefore there are new requirements. And these requirements include that the size is shrinking; 10:54:52 therefore our ASICs combine the 32 channels. Power consumption must be reduced: again, ASIC1 is consuming 25 milliwatts per channel, and ASIC2 is 4.5 milliwatts. 10:55:10 With the channel count growing, high-speed interfaces reduce via congestion: 10:55:20 so, our ASICs have high-speed serial interfaces, the first and the second ASIC, and they also have features that permit sharing output buffers among several ADCs, two, four, or eight, and therefore we can reduce the number of data lanes; 10:55:45 while ASIC2 includes the event-building back end, which transmits only the event-related data, so it's compressed and the data rate is lower. 10:56:01 And, as we understand, the digitizing accuracy has to be sufficient, so we have 12-bit resolution built in; the sampling speed in ASIC1 is up to 0.5 gigasamples per second. 10:56:25 And also, as we understand, the latency is important and should be as low as we can get, so we have low latency in ASIC1, which is eight nanoseconds. 10:56:40 Okay, the project goals.
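The "huge data stream" of the pure-ADC part is easy to quantify, and it shows why the event-building back end and lane sharing matter; a quick calculation (the 8b/10b-style 25% line overhead is my assumption, consistent with the JESD204B-class links mentioned later):

```python
# Raw output bandwidth of a 32-channel, 12-bit, 0.5 GSa/s digitizer array.
n_channels = 32
bits_per_sample = 12
sample_rate_hz = 0.5e9

raw_gbps = n_channels * bits_per_sample * sample_rate_hz / 1e9   # 192 Gb/s

# With 8b/10b-style encoding (25% line overhead, an assumption), the serial
# line rate grows further; sharing one output lane among 4 ADCs cuts the
# lane count from 32 down to 8.
line_gbps = raw_gbps * 10 / 8
lanes_shared_by_4 = n_channels // 4
```

Against those 192 Gb/s of raw samples, an event-building back end that ships only threshold-crossing events reduces the average rate by orders of magnitude for sparse detector signals.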
10:56:42 So: we develop the chips, fabricate the chips, develop a special chip carrier, package them, develop the test boards for evaluation, 10:57:04 and then the GUI for testing automation, and then characterize those ASICs; and of course offer them to both the nuclear physics and high-energy physics communities, and to industry, for use in their systems. 10:57:17 Now, a little bit more detailed overview of ASIC1. 10:57:24 So, the targeted performance: well, I mentioned already some of those numbers. The input differential swing is 0.6 volts peak-to-peak, and the expected ENOB is 10 bits; 10:57:42 the bandwidth, 250 megahertz. 10:57:47 And the ASICs both include the JESD204B-standard data interfaces. 10:58:00 The control interface is I2C, and we can package them in BGA packages, again to save area. 10:58:09 And the chips are implemented using 28-nanometer CMOS technology. 10:58:16 Here is the block diagram of ASIC1, which is, as I mentioned, just an array of ADCs, but independent ADCs. 10:58:26 So here we have the ADCs and output buffers, the PLL, which is synthesizing the clocks, and some support blocks that include things like voltage and current references, a temperature sensor, the serial interface for control, and a CPU for calibration purposes. 10:58:55 How does our ASIC compare to what's available on the market? So here, here's the comparison; it is only for 12-bit ADCs. 10:59:06 So, there are ADCs that have 32 channels; however, the sampling rate is low. 10:59:14 And the ADCs with faster sampling have fewer channels and higher 10:59:35 power. And we also have an option, an operation mode, 10:59:43 where the latency is very low. 10:59:49 We are showing here the fabricated chips. 10:59:55 So this is the chip photo and some zoom-ins into different areas: this area is where one ADC channel is shown, 11:00:09 and this area includes the supporting blocks: 11:00:15 the serial interface, PLL, temperature sensor.
11:00:24 We developed a special chip carrier, which is used for both ASICs, and this chip carrier is an 18-by-18 ball array at a 0.8-millimeter ball pitch. Here is the cross-section. 11:00:41 This one pictured is for another part, not for this specific one, but it's exactly the same construction, just the ball count is different. 11:00:52 So we have the chip here, 11:00:57 flipped and bumped, and we have some components that are capacitors, which are used for power-supply bypass and reference-voltage bypass. 11:01:22 The interconnects within the chip carrier were carefully simulated, and we achieved good S11 and S21 up to high frequencies, which are more than adequate for those two ASICs that we're presenting here. 11:01:38 The test setup. 11:01:40 The board here is for evaluation of ASIC1, which I'm talking about right now. 11:01:49 So, here we have the differential input connectors, placed at the same distance from the ASIC, which is housed in the custom socket; you can see it here a little bit. 11:02:07 And then the outputs go out the other way, because they're less critical to degradation 11:02:14 when the signal travels over those traces on the PCB. We used Megtron 6 PCB material for these boards. 11:02:26 Okay, testing results. So, first, yes, the power consumption: we have the total power consumption, which includes also the JESD204B interface; here, that's per channel. 11:02:47 The PLL was tested as well: we targeted a 16-gigahertz frequency; we got a little bit lower, but we have some bands that allow us to reach that target frequency, so no issues there, basically. We measured the phase noise and integrated the phase noise 11:03:09 over this range, and got 144 femtoseconds of RMS jitter, which is okay for 11:03:22 the 12 11:03:23 bits.
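The quoted ~144 fs of integrated jitter can be sanity-checked against the 12-bit target: aperture jitter limits the achievable SNR at high input frequency through the standard relation SNR = -20 * log10(2*pi*f_in*t_j). A quick check at the 250 MHz band edge:

```python
import math

def jitter_limited_snr_db(f_in_hz, t_jitter_s):
    """Aperture-jitter-limited SNR in dB for a full-scale sine-wave input."""
    return -20.0 * math.log10(2.0 * math.pi * f_in_hz * t_jitter_s)

snr_db = jitter_limited_snr_db(250e6, 144e-15)   # ~73 dB at the band edge
enob = (snr_db - 1.76) / 6.02                    # ~11.8 effective bits
```

So the measured jitter alone would still support close to 12 effective bits at 250 MHz, which is why the speaker calls it adequate for this converter.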
11:03:27 And we measured the SNR. We targeted above 60 dB; however, in this first prototype ASIC we were not able to achieve what we targeted, so we got a lower SNR, 11:03:47 and therefore the ENOB is lower. 11:04:01 We get about 9.5 ENOB at 5 megahertz, and it was lower at 250 megahertz, which is the Nyquist frequency for 500 megasamples per second. 11:04:21 Then, if we calibrate at a different frequency, we get the best ENOB at that specific frequency. And then, as the input frequency goes up, the ENOB goes down. 11:04:37 As you can see, for the bandwidth, we achieved the bandwidth that we targeted. 11:04:52 What else? Yes, future plans. So, the first prototype ASIC has some deficiencies: the ENOB is lower, and there are some other functionality issues, so those need to be fixed, to be addressed. 11:05:12 Therefore, we need to find funding for the phase two, 11:05:16 as a DOE SBIR project, to redesign the chip, increase its performance, fix the issues, fabricate and retest, prepare and adjust the datasheets that we have right now, and provide the part for communities that are interested in using it, and commercial 11:05:41 customers. 11:05:44 Now, a more detailed description of ASIC two. 11:05:52 So we have 32 channels with event-building digital cores built in, and the digitizers in front. 11:06:01 And the digital part was developed in collaboration with LBNL. Specifications, again: 32 channels. 11:06:19 It also permits a programmable sampling rate 11:06:24 with the same clock supplied; so if the clock frequency is low, we can have any sampling rate. 11:06:35 Our target ENOB is no smaller than 10 bits, 11:06:39 a 1 volt peak-to-peak differential input signal, 11:06:43 and then we have a programmable low-pass filter in front that permits setting the cutoff frequency within this range.
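The SNR-to-ENOB conversions quoted throughout the talk (the >60 dB target, the ~9.5 ENOB result) follow the standard ideal-quantizer relation ENOB = (SINAD - 1.76) / 6.02; a minimal sketch:

```python
def enob(sinad_db: float) -> float:
    """Effective number of bits from measured SINAD, using the
    ideal-quantizer relation SINAD = 6.02 * N + 1.76 dB."""
    return (sinad_db - 1.76) / 6.02

target = enob(60.0)    # the >60 dB target corresponds to roughly 9.7 bits
achieved = enob(59.0)  # a hypothetical 1 dB shortfall costs about 0.17 bits
```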
11:06:54 Yeah, there's the I2C interface for control, a CPU for calibration, an integrated temperature sensor. 11:07:01 Here we're showing the 32 channels, which include the ADCs, the FIFOs, the event builders, and 11:07:11 the serializer, which serializes the event-related data and ships out that compressed data; compressed, because it only includes the events. 11:07:39 It doesn't stream out all the data that doesn't carry information. 11:07:41 In addition, it has a direct data output for the ADCs. 11:07:51 We have two modes of operation here: we have the JESD bypass mode, which permits reducing the latency, 11:08:01 but that data stream of course is much higher. 11:08:06 Yeah, the supporting circuits: again, the serial interface is here, the calibration CPU, bias blocks, references. 11:08:18 The raw data output has the possibility to combine either two, four, or eight channels of ADCs; therefore the lane count, which gets the signal out, can be 16, 8, or 4 for the 32 ADCs. 11:08:47 The power consumption is here, so we have 4.4 to 4.5 milliwatts per channel 11:08:56 with the data interface, and if we operate the ASIC in the direct-ADC mode, 11:09:03 where there is direct data output from the ADC, in that case, of course, the power consumption increases. 11:09:16 Here is the back end in more detail. 11:09:21 So we have the digital comparator. Data is coming from the ADC, and there is a digital comparison against a threshold. Once an event is detected, its characteristics are determined here, and then data from all channels comes into 11:09:42 this FIFO; the data is combined, serialized, and sent out through the UART interface. 11:09:55 OK, so the frame: 11:09:58 for each event, 11:10:05 we have a frame, which includes 123 bits, starting with a start bit and ending with a stop bit. 11:10:13 And we have parity, for example, included in the frame.
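As an illustration of the start/stop/parity framing just described, here is a toy frame builder with an even-parity check. The bit ordering, payload width, and parity convention are invented for illustration; the talk does not spell out the actual 123-bit frame layout.

```python
def build_frame(payload_bits):
    """Toy event frame: start bit (0), payload, even-parity bit, stop bit (1)."""
    parity = sum(payload_bits) % 2               # even parity over the payload
    return [0] + list(payload_bits) + [parity] + [1]

def parity_ok(frame):
    """Receiver-side check: payload plus parity bit must have even weight."""
    return sum(frame[1:-1]) % 2 == 0

frame = build_frame([1, 0, 1, 1, 0, 0, 1])   # hypothetical 7-bit payload
```

A single flipped payload bit makes the parity check fail, which is the kind of link-integrity protection the speaker is referring to.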
We have the event declaration as one or zero, 11:10:26 the channel ID, the time of arrival, the time of peak, the time over threshold, the peak value, and the threshold that was set for that channel. 11:10:38 And a flag that indicates when it's overflowing, so we can detect up to about four kilo-events per second per channel. 11:10:54 And here is the test board. 11:10:57 These are the chips, showing different areas of them, and the board, which is very similar to the board that was used for ASIC one. 11:11:13 Same inputs here, outputs here; all are differential, relatively high-speed inputs, going up to, 11:11:18 sorry, 100 megahertz bandwidth, that's what's required. Testing results: so, this is the SNR, normalized, as the input amplitude was increased, depending on how much attenuation we get. 11:11:38 And we measured this SNR at 56 to 57 dB, which is approximately 9-bit ENOB. 11:11:47 Well, we targeted 10 bits, so we haven't yet achieved that. Yeah, the bandwidth that we actually have is as expected; we can program the bandwidth by using the built-in low-pass filter. 11:12:07 The spectrum, which shows the SFDR: 11:12:11 we have an SFDR of 69 dB 11:12:16 at this input frequency, while we expected to have a higher SFDR, 11:12:23 so again, there is a need to improve it. 11:12:29 Here we are just showing how it digitizes and describes those pulses that are mimicking the pulses expected to be coming from the detectors. 11:12:44 So here is a zoom of the pulses coming into the ASIC. 11:12:52 Here are the pulses that come directly out of the ADCs, the digitizers. 11:13:01 And here are the framed pulses. So each pulse has its description here: 11:13:09 like, time of arrival here, 11:13:13 time of peak, 11:13:16 time over threshold.
So this kind of indicates: you can see the first pulse is narrow, so the time over threshold is small here, and these are longer, so you see a longer one, and so on. 11:13:33 So all this data is framed and shipped out through the low-speed UART interface. 11:13:40 Future plans: yes, as I already mentioned, we need to fix some issues on this ASIC, and so we are looking for funding, and hopefully we'll get it, and then we'll be able to produce the second prototype, which we expect to have all the 11:14:01 issues fixed that we measured and researched. 11:14:11 And that's about it. Thank you very much for your time, and thank you for the opportunity to present here. 11:14:21 I'm ready for questions, and if I cannot answer some questions, please email me and I'll direct those questions to the engineers who were directly involved in developing those ASICs. 11:14:36 Thank you, Dalius, that was a good update; I appreciate the nice slides. I open the floor to questions; right now I don't see any hands up. 11:14:50 But go ahead and turn your mic on if you have a question. 11:15:01 If not, that takes us to the next talk, and I understand... 11:15:09 Thank you very much, Dalius. 11:15:14 It's listed as Dr. Andrew Levi from Alphacore; however, Esko will be giving the presentation. 11:15:24 And you emailed me a talk? 11:15:28 So, we don't see it, whereby... Andrew is having some technical difficulties and couldn't get on. 11:15:40 So, yeah, let's see. Do you have the... we did email it to you, but I guess it's being shared from people's computers, right? 11:15:53 Hi, this is Doug. 11:15:54 I just checked my email, and I have a talk from Esko, or sorry, from Andrew Levi. I'll post it today to the page. 11:16:11 So, give me one minute and it will be there, 11:16:16 I hope. 11:16:41 Sorry for the delay here. 11:16:48 Understood. 11:16:54 Thanks. Hey, it should be there now if you refresh your timetable.
11:17:02 Chris? 11:17:05 Yeah, refresh. Doug, I don't see it. 11:17:09 At least not on the list. I downloaded it, so maybe I can share it. 11:17:14 There's Esko. Esko, 11:17:18 can you hear me? Yes, yes. 11:17:21 Let me... 11:17:34 I can share my screen now. Okay, sure. 11:17:38 Esko, go ahead. 11:17:40 Yeah, so. 11:17:46 Yeah. Good morning, everybody. This is an update from Alphacore on our readout 11:17:57 ICs for nuclear physics and high energy physics experiments. 11:18:08 So, we basically have two programs: 11:18:14 one sponsored by the DOE Nuclear Physics SBIR, and another one with the High Energy Physics SBIR, and they are readout electronics in either 90 nanometer to 180 nanometer CMOS or 22 to 28 nanometer CMOS. 11:18:33 So we summarize 11:18:36 both of these 11:18:38 IC portfolios. 11:18:46 Our goal 11:18:51 in this DOE SBIR phase two on multi-channel readout ICs 11:19:02 has been to do low-cost readouts in 90 nanometer and 180 nanometer, including analog front ends and digitizers, 11:19:16 and in the other one, to do very high performance and at the same time low-power ADCs. 11:19:35 And 11:19:38 some of these are currently available as test boards or evaluation boards, and some of this is in the form of IP from which we can build 11:19:54 readout ICs to the customer's specs. 11:20:04 So here, to quickly summarize 11:20:11 the 22 to 28 nanometer CMOS program: 11:20:17 we have now silicon-evaluated five different designs for digitizers, and these are fully continuous ADCs. So basically you can stream everything in real time to the FPGA, or to 11:20:39 whatever your next stage is, if it's another ASIC. 11:20:52 Or we also have the possibility of putting a FIFO or a larger SRAM memory on the chip, where you can store a few thousand samples for every channel, 11:21:00 if you want to do it that way, and then have a lower-rate data stream from your ASIC to the FPGA.
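To see why the on-chip FIFO/SRAM option lowers the link rate, compare a continuous raw stream against triggered, buffered readout; the trigger rate and window size below are hypothetical illustration numbers, not Alphacore specifications:

```python
def stream_rate_gbps(bits: int, fs_hz: float) -> float:
    """Raw output data rate of a continuously streaming ADC, in Gb/s."""
    return bits * fs_hz / 1e9

# Continuous 10-bit 500 MS/s stream vs. reading out 1000-sample buffered
# windows at an assumed 100 kHz trigger rate
continuous = stream_rate_gbps(10, 500e6)   # everything shipped to the FPGA
buffered = 10 * 1000 * 100e3 / 1e9         # only triggered windows shipped
```

The buffered case only pays for samples inside triggered windows, so the ASIC-to-FPGA link can run at a fraction of the raw conversion rate.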
11:21:12 And these 22 to 28 nanometer ADCs vary from 300 megasamples to 5 gigasamples per second, and the power quoted is for continuous operation. 11:21:28 If we were doing more of a waveform sampling, where we have an output memory and we allow some dead time, then the power could be lower. 11:21:42 And then the 90 nanometer and 180 nanometer: 11:21:48 there are ADCs that are not in the gigasample range but more like 50 megasample or 100 megasample, up to 14 bits, 11:22:01 35 milliwatts; 11:22:05 ten bits at seven milliwatts. So, these are done in very low-cost CMOS processes, so they may be attractive as well. 11:22:16 So then, a few words about our 22 to 28 nanometer ADC library. 11:22:28 Here I'm... 11:22:30 let's go back. Yeah, I wanted to 11:22:35 say again that these can be used either in real-time streaming, where everything is digitized in real time, or with a memory on chip. 11:22:50 And then we go to the test results. For example, the ten-bit 500 megasample has been tested up to 11:23:04 a one gigahertz input, and it provides an ENOB between eight and nine bits, at very low power for continuous operation, only about one milliwatt. 11:23:31 We have all those numbers if somebody is interested. 11:23:34 Well, actually you can kind of see: 11:23:38 yeah, it's 43 micrometers by 102, so it's even much smaller than I remembered. 11:23:47 Another one is a nine-bit one-gigasample, providing a little bit lower ENOB, around seven and a half. 11:23:56 This again, the layout is 58 micrometers by 126, so if we want to do multi-channel devices, they don't take a lot of area. 11:24:08 This is tested for an input bandwidth of 11:24:12 two gigahertz, and then...
11:24:15 That's not the maximum at all; that's just where we tested to. And we also have input buffer technology to extend that input bandwidth to 10 gigahertz, or 11:24:30 a much higher number. 11:24:35 Then the 2.4 gigasample was also tested; the layout size of that is 164 by 600, roughly, micrometers, so this one is a little bit larger, because it is time-interleaved, eight cores. 11:24:58 And only 6 milliwatts of power in continuous operation, with an ENOB around 8.3 to 8.5. 11:25:20 And this is the one where we have tested our calibration algorithm for time-interleaving spurs, an on-chip calibration circuitry, and it seems to be working quite well. 11:25:27 Here's a plot: this Walden chart is for continuously digitizing ADCs, 11:25:59 with speed, resolution, and power all combined into one number. And for a real-time ADC at 2.4 gigasample, with an ENOB of eight and a half and a power of only 6 milliwatts, that combination places you really high on the Walden chart. 11:26:23 This is just another way to show the same thing, called a Schreier chart; 11:26:34 some people want to see it that way, but again, we are quite competitive. 11:26:44 Also worth mentioning, we are IP partners of GlobalFoundries, 11:26:49 so we make our IP with the most stringent design rules for very high yield and high reliability. 11:26:57 We even simulate for automotive applications. 11:27:00 Many of the other ADCs on these Walden charts could be from academia, where they don't necessarily have to design for high yield. 11:27:12 So that's another benefit of our designs. 11:27:19 Then, those were our most advanced digitizers 11:27:27 that go up to five gigasample; now I want to just quickly say a few words about the other program, where we were doing the lower-cost 180 nanometer and 90 nanometer designs.
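The Walden chart mentioned here ranks ADCs by the figure of merit P / (2^ENOB * f_s), where lower is better; plugging in the quoted 6 mW, ~8.5 ENOB, 2.4 GS/s numbers gives roughly 7 femtojoules per conversion-step:

```python
def walden_fom_fj(power_w: float, enob: float, fs_hz: float) -> float:
    """Walden figure of merit P / (2^ENOB * fs), in femtojoules
    per conversion-step (lower is better)."""
    return power_w / (2 ** enob * fs_hz) * 1e15

# Numbers quoted in the talk: 6 mW, ~8.5 ENOB, 2.4 GS/s
fom = walden_fom_fj(6e-3, 8.5, 2.4e9)
```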
11:27:40 There we did these two- and four-channel charge-sensitive amplifiers. 11:27:48 And here 11:27:52 are a couple of the test boards and a die micrograph for them, 11:27:58 the measurement setup, 11:28:01 and some of the test results. 11:28:06 And 11:28:09 then, the 90 nanometer and 180 nanometer ADCs we also did in this other program. 11:28:19 The flagship is a 14-bit, 50 megasample ADC at 35 milliwatts of power. 11:28:29 This is again a continuous ADC; it's not a waveform sampler, so you can digitize everything that comes in and push it all out. 11:28:41 So for this one we have the chips available. 11:28:49 Then, we did another one where we were trying to push the power down, a 90 nanometer ADC. 11:29:02 And this one does not have on-chip calibration; that's why the ENOB is only 7.3 bits. 11:29:11 It's 50 megasample at seven milliwatts. This may still be enough for some experiments; that's why I wanted to show it here. 11:29:24 Yeah, we also made a chip interface in 180 nanometer, a low-power 11:29:32 one, called SLVS, that is still 11:29:37 much lower power than LVDS. So that was tested, and there's a PLL there; there's a serializer there as well. 11:29:52 And a general-purpose PLL that allows generating the clocks on chip, 11:29:57 so you can set the data transmission clock rate 11:30:07 at different levels if you want to; that burns some power, so if you want to save power, that's a possibility. 11:30:14 This is also tested on silicon. 11:30:20 I was a little bit faster than I thought. 11:30:23 I... 11:30:25 this is the summary of all 11:30:29 the ICs that we think are the best matches to this community. We have also some ongoing work on a rad-hard optical transmitter, 11:30:41 and this is 28 nanometer CMOS, at a 100 gigabits per second rate.
So, there are not many rad-hard optical transceivers available, so I kind of wanted to throw the question to this community: 11:31:00 what are the plans for these types of chips, and whether what we are working on would be a match. 11:31:08 I'd like to hear some feedback on that. 11:31:13 So anyway, this is basically for when you need to combine data from a lot of detectors, digitize the data, and then push it through fiber to your control room. 11:31:30 Yeah, so then the summary is: we presented the results of two programs, and for both of those programs the funding has ended, 11:31:45 this past summer. 11:31:46 So, we are currently just on internal R&D funding on these; we are looking for more ways to get funding. 11:31:56 So, if there are any chances, we are interested, and yeah, we are very interested in feedback. 11:32:09 And we'd like to acknowledge these two DOE SBIR programs, the grant numbers, and the people supporting us have been Manouchehr Farkhondeh and others [names unclear]. 11:32:26 Actually, I wanted to mention that we have a chance to continue this work with a phase IIB program, 11:32:35 and we are submitting the proposal next week. 11:32:38 So, if we were able to get some letters of support, that would be great, so we can keep optimizing this technology. 11:32:54 Yeah. 11:32:56 Thank you very much for the opportunity to present, and now if you have questions, 11:33:03 we are available 11:33:06 to answer. Thank you. So that was a very nice update. 11:33:13 I wanted to extend my appreciation for your continued involvement in these workshops. 11:33:23 It's been several years now, and it's good to see the progress. I had a question on availability or sales: do you have customers that have actually purchased your latest developed chips? I mean...
11:33:36 I've looked at the website and haven't really requested a quote, but how available are some of these parts now? You know, not the eval boards, but the actual packaged chips. 11:33:55 Yeah, I have to say that our commercial push right now is more for the IP licensing type of deals; 11:34:04 we've taped out more or less test chips. 11:34:24 But, 11:34:27 so maybe... 11:34:29 And some of these... So, we haven't sold chips to customers. 11:34:35 We have a few of these on a chip board that has the interface already, or on test boards, that we could provide to community members. 11:34:51 But, and I would say that 11:34:55 maybe a couple of these lower-end ones, like the 180 nanometer or 90 nanometer ADCs, could already be something that could be applied as they are, so that no other tapeout is needed. 11:35:12 But, for example, if people want more multi-channel chips for those, 11:35:18 then we would have to go to another tapeout. 11:35:24 So in the IP world, we are licensing these cores to companies in defense, in 5G, in these types of marketplaces. 11:35:38 That's good news. 11:35:41 I want to open up the floor; I don't see raised hands, but... 11:35:46 And we're effectively back on track, maybe a little early even, 11:35:53 if there are no other questions. Thank you again. 11:35:59 It's good to see you. 11:36:10 Yeah, good to see you too. And your place is in Tempe now, right? 11:36:09 Yes. Yeah. I mean, we started in Tucson, but now Tempe, and we have our own facility; 11:36:21 we're building a nice test lab here, and so if you are around, please come to visit; it's right across from the campus. So that's good. Yeah, very nice. 11:36:31 All right, so the last talk of the session is from Nalu Scientific. 11:36:40 And I believe the talk's up there. 11:36:44 If not, you can share. 11:36:47 Morning. 11:36:49 Can you hear me? 11:36:50 I can hear you.
I moved it up by about 15 or 20 minutes; hopefully it's not too early in the Aloha State, but good morning, I guess. 11:36:59 Yes, this is fine, thank you. Thanks for the invite, and thanks for having me, and also for considering the time zone. 11:37:23 It's much better than 3am, so I'm quite happy with this. 11:37:13 All right. Do you see my screen? 11:37:15 Yes. 11:37:16 Okay, great. Well, first of all, thanks to everyone; thank you, Chris, for inviting me. It's really nice to see some of the colleagues and collaborators here, and also Esko and team, and Dalius from Pacific Microchip; we do communicate 11:37:31 over such meetings and compare notes, so it's nice to get together with everybody here. 11:37:38 Yeah, so I'm here to talk to you about the front-end microelectronics that we've been developing over the past few years; the work has been mostly funded by the DOE Office of Science NP and HEP programs. 11:37:51 There is a previous version of these slides that I also showed at the previous SRO meeting in November, and the link is posted here, so I tried to create a differential slide deck and not repeat a lot of content, but there's still 11:38:08 some intro and other material that you'll see repeated. 11:38:12 So we design these event-based digitizers and DSPs on a single chip, essentially, from left to right. People ask me what we do all the time, and so I say we're trying to vertically integrate ourselves into this space: 11:38:35 4-to-32-channel oscilloscopes-on-chip, you know, one to 15 gigasamples per second.
11:38:35 The key word is to be really focused on the events, and that way we can save on size, power, and cost. The chip itself might be good, but then we also need to integrate it, so we're creating user-friendly firmware and software tools and also 11:38:53 evaluation boards to be able to try these. The next step for us is to integrate these readout chips; here's an example of a packaged version, and what the inside looks like, essentially a die micrograph. Essentially, we would integrate 11:39:12 these microchips with detectors, SiPMs, MCP-PMTs, LAPPDs, and any sort of array-type detector, which might require PCB work and clever mechanical work to be able to demonstrate some use cases. 11:39:32 And then obviously the main application we are aiming for is NP and HEP particle physics applications: large channel count, lots of data, where you have constraints in power and size, and also cost becomes a factor. 11:39:50 Other applications we're aiming for, that we have funded programs on, are again within the field of HEP and NP: beam diagnostics, 11:40:00 plasma and fusion diagnostics, and also imaging and LIDAR, and some medical imaging applications too that we're looking into. A little bit about us: 11:40:19 people ask me where the word Nalu comes from; it actually means wave in the Native Hawaiian language, and we are digitizing waveforms, so when I started the company that's what came to my mind. 11:40:26 And so far we've secured over $11 million in committed funding, grants and contracts from a variety of government customers and also the private sector, which has allowed us to grow to about 18 staff members, mostly PhDs and master's degrees 11:40:45 in electrical engineering, physics, mechanical engineering, and software. 11:40:50 We have access to advanced design tools from Cadence and the like,
and commercial-grade tools that allow us to commercialize the technology and the designs after we fabricate them. 11:41:04 Our expertise is in HEP and NP and particle physics detection and tracking, a lot of radiation detection, and knowing the needs of the field, essentially; that's how we got started, and then we realized that we need to design microchips 11:41:20 to solve problems in this field, and hence we assembled a team that works together. 11:41:27 We're also working very closely with the University of Hawaii, which is just a couple of miles down the road from us, and I was a postdoc of Professor Gary Varner. 11:41:37 That's how I got involved in the field; I scratched my head one day and thought somebody should start a company and try to commercialize the technology. 11:41:50 A little bit about where we are. 11:42:03 As you know, this is a very highly specialized field, 11:42:06 and there's quite a bit of institutional knowledge needed to be successful. It's not just electrical engineering; it's electrical engineering, mechanical, physics, data processing, software, firmware, and so there's quite a body of knowledge 11:42:22 that has to be really specialized for the needs of such experiments. 11:42:27 And that's what we've been trying to put together over the years. 11:42:31 We work closely, creating sponsored research funding for the University of Hawaii and national labs, which has allowed them to hire postdocs and grad students, and some of these people have actually done their first jobs with us. 11:42:48 People ask me why we do this in Hawaii, and my answer is that we are in a strategic location, close to both Asia and the US, so we get to collaborate with both sides pretty consistently; there's the proximity to the University of Hawaii with real world-class facilities 11:43:02 there; 11:43:04 and also the Hawaii brand. Nobody forgets Hawaii.
11:43:09 And we also manage to retain local expertise here: people that graduate from the university are local here, and they get to stay and have a job and a living that not only feeds their families and themselves but also allows 11:43:26 them to be close to their families, and not have to move away. 11:43:30 The state being so tourism-dependent, 11:43:34 this has been challenging, especially during the pandemic, so with this type of field we're making a dent, a small dent, in the livelihood of people here in the state. 11:43:45 Moving forward, we're looking at new possibilities. Obviously we're constantly trying to collaborate more, and there are microchips, some of which have come to some maturity, and new integration possibilities, new custom designs, 11:44:00 and also getting into new partnerships, so we're open for business; please do reach out to us. 11:44:08 Over the years we've managed to secure funding and also work on a variety of system-on-chip projects. 11:44:19 I'll start with the top one, ASoC, or "a-sock". 11:44:23 Essentially, you can see... I won't read the entire table to us; it will also be posted. 11:44:28 But this is kind of our oldest design. We started this in 2016-17, and we've made three revisions of this chip. These chips are all designed in 250 and 130 nanometer CMOS, which are rather mature in terms of foundry information. 11:44:48 So luckily the number of surprises that we see is declining; I won't say there are no surprises, but luckily the foundries have matured the datasheets and information that you can get, so IC designers can actually design something that will work 11:45:10 to spec. 11:45:13 So the ASoC chip is a lower channel count, and HDSoC is kind of the next big thing coming up.
11:45:20 And I'll tell you a little, in a few slides, about the properties there. AARDVARC is getting some maturity here, and it's also the subject of the integration project that I'll show some slides on. AODS is a high-dynamic-range rearrangement of ASoC. 11:45:35 Also, the new kid on the block, essentially, is the STRAW chip, the streaming autonomous waveform digitizer with zero suppression. 11:45:41 This is a 65 nanometer chip. We just got funded for a phase one, under DOE Nuclear Physics, for this, running at five gigasamples per second, and aiming, during the design, for really revamping this to have a lot of channels, and also being able to perform 11:46:01 not only data conversion but also data processing and feature extraction on chip, so we can really reduce the data that comes out of the chip and achieve high trigger rates without having to stream 11:46:15 everything, or every waveform, essentially. 11:46:20 And a lot of these chips were developed while collaborating with the University of Hawaii. 11:46:27 So, again, these are slides that have already been shown, so I left them here just for the record. 11:46:37 We've also developed GUIs and software. So, if you want an evaluation version of these chips, essentially it comes on a printed circuit board, maybe in a box, 11:46:48 and then the software and firmware that go with it, and lots of buttons to push around. It's a portable system, works with a USB interface, common to all Nalu chips; essentially, you can configure the DAQ, configure the chip features, push 11:47:04 some buttons and start taking data, visualize it real quick, save the things that are important, export to CSV. It runs in Python, so it's portable and runs on Mac, Linux, and Windows, essentially no problem. 11:47:32 Again, some AARDVARC design details.
Again, these were already discussed; the new thing here is that we did some tests, some simulations, essentially using the v3 of AARDVARC, and this was presented at the IEEE conference in November. 11:47:36 So we came up with this kind of banking mechanism to be able to achieve, 11:47:44 you know, deadtimeless, or close to deadtimeless, operation, and the simulation essentially shows that up to a certain 11:47:51 trigger probability we can achieve that kind of streaming-mode, zero-deadtime operation. It requires quite a bit of clever digital bookkeeping on chip, so this is all on-chip processing that happens to really keep track of the data that's 11:48:11 coming in and zero-suppress on chip, and that's important because we are trying to avoid having to ship everything to the FPGA, or having control coming from an FPGA in terms of making the zero-suppression decisions. 11:48:23 So, if the chip can autonomously do this, then obviously we save a lot on power, board complexity, and FPGA requirements, those three key aspects. 11:48:35 And so, yes, the simulation shows that; our next step is to test it, and I'll show you what the candidate project is for testing this. The HDSoC, the high-density waveform digitizer system-on-chip, 11:48:50 essentially has some of these concepts incorporated. It's 250 nanometer CMOS, so pretty low cost to fabricate, and we're aiming for high density, 64 channels. 11:49:01 And this was a phase one DOE SBIR; we actually fabricated, through the phase one SBIR, a 32-channel prototype. 11:49:11 64 channels would just cost twice as much, and funding was limited in phase one. I want to thank the team in the back; they did an excellent job of working through all the holidays, actually, making this happen. But the good news is this is now 11:49:25 a funded phase two SBIR project, so we're just about waiting for the award.
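The on-chip zero suppression being described can be sketched as keeping only samples near threshold crossings and shipping (index, value) pairs instead of the full stream; this is a toy model for illustration, not the actual AARDVARC banking logic, and the threshold and window values are made up:

```python
def zero_suppress(samples, threshold, window=4):
    """Keep only short windows of samples around threshold crossings,
    discarding the baseline, so only event data leaves the chip."""
    keep = set()
    for i, s in enumerate(samples):
        if s > threshold:
            # retain a window of samples around each crossing
            keep.update(range(max(0, i - window), min(len(samples), i + window + 1)))
    # ship (index, value) pairs instead of the full waveform stream
    return [(i, samples[i]) for i in sorted(keep)]

stream = [0, 0, 1, 0, 7, 9, 6, 1, 0, 0, 0, 0]
events = zero_suppress(stream, threshold=3, window=2)
```

The payoff is the one described in the talk: at a modest trigger probability, the suppressed output is a small fraction of the raw stream, so the chip can keep running without dead time or FPGA-side decisions.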
To get started on this, here is a picture of the die: 16 channels on the left, 16 channels on the right, lots of digital goods in the middle, 11:49:39 and, you know, signals coming in from the sides, digital going out in the middle. It's essentially really designed for, say, SiPM readout; it has gain biasing and things like that integrated into the chip, and runs at about one gigasample per second. 11:49:54 And our simulations show that it can run up to, you know, 400 kilohertz of event rate, essentially, and it will actually keep track of input data and kind of suppress zeros and take care of the important things there on chip. 11:50:15 So the next step for this chip is to package it, put it on a PCB, and start testing it, and as soon as it turns on, I'm hoping in the next few weeks we'll be able to get to that, so it's quite exciting for us to be able to announce this. Another effort 11:50:30 that's going on is integration. We realized that just having a microchip is not quite cutting it; it's really difficult to go ahead and design and fabricate a microchip, and there is quite a body of knowledge that comes with that 11:50:47 process, and so we realized that we are essentially experts now, and so we are the most qualified to put these chips on a board for a specific application, essentially. So that's where we have been going out and seeking funding, to vertically 11:51:03 integrate these chips into board plus firmware plus software and all the engineering that comes around that, essentially, and so do these demonstration projects. 11:51:13 In this case, working with Incom to read out their 11:51:17 Gen one LAPPD, which is strip-based. So for this case we have these strips, 28 strips, that need to be read out on both sides.
And again, I'm happy to jump into detail, explain more about these, and show you more pictures of how this works. 11:51:36 We basically want to have the readout attached to the back of the LAPPD, such that we can tile an entire wall of a detector; that's as far as I can tell you in terms of my understanding of the problem we're trying to solve. That way we can avoid 11:51:52 bringing all the analog signals away using cables and having to deal with all the degradation and all the other issues, and so everything converts to digital using these AARDVARC chips, which are running at about 13 gigasamples per second, and gets 11:52:07 captured by an Artix-7 FPGA here; the serial data from the chips come into this, and there is some calibration circuitry on chip, power, and optical-fiber readout. 11:52:20 And this is really just a bare-bones kind of demonstration project using the Phase I nuclear physics SBIR funding. 11:52:30 There are aspects of DAQ in this, and I have already asked and contacted some of the experts that are on this call to help us flesh out what the needs are and support us in this project, so there are aspects of the DAQ that could be interesting 11:52:44 for this workshop: 11:52:46 what the relationship is between the chip and the FPGA, and also the FPGA data packaging and communication with the back end; 11:52:53 what sort of back end we need to talk to. 11:52:57 So we're working on this project; it's sitting in a dark box as we speak, parked at the University of Hawaii. 11:53:07 On the other side of this board is essentially the LAPPD tile, and so we're taking data; here is, really, data from yesterday, actually. So, really brand-new data: we can see a huge pulse coming out. We're shooting a laser pulse, and we're going to scan 11:53:22 the tile, so that we can see how well we can discern the position of the rising pulses.
11:53:29 So we use the tile from the LAPPD plus the readout board, and the goal is, once we figure this out, 11:53:39 this can become a detector element that can be used to tile 11:53:47 the surface area of interest in a detector. 11:53:51 So this is also going on, and there are a lot of synergies here between the chip side and the integration side. We thought designing the chip was difficult; now integrate it, and all of a sudden you have a team of your 11:54:05 chip designers, plus the hardware board-level people, plus RF guys, plus software, firmware, mechanical, thermal, and the complexity goes up dramatically. I'm lucky to say that in the Phase I project we managed to design the 11:54:20 board, make the board, and do some initial testing. Now the plan is to move to Phase II; next week is the deadline for the application to get funding, to be able to really perfect the recipe on this front. 11:54:33 Now, this might look like it's geared just towards LAPPDs, but the short answer is that a lot of the lessons learned from this can be used to change the form factor and read out more channels of any other type of detector, from silicon to MAPMTs; 11:54:50 the other side of the board is quite empty, there's nothing there essentially, so a little bit of routing and a little bit of characterizing can help us reuse this board for other types of detectors. The LAPPD was used as a demonstration project 11:55:04 to achieve the picosecond timing resolution, plus the DAQ connection. 11:55:11 So, again, some results on deadtime-less operation here: we started with a switched-capacitor array design, 11:55:20 and then we implemented this multi-bank kind of technique to be able to get to
11:55:36 that deadtime-less operation. Now, moving forward, we're moving to the multi-bank memory, which is the fundamental design in our new chips, from the HDSoC onwards, 11:55:45 and essentially we'll hopefully be able to get to deadtime-less operation, with this on-chip data reduction and feature extraction, a lot more easily, and also hit the higher rates. 11:55:53 And the goal is to get there and have the upper bound be the serial link on the chip, essentially, so that we don't miss anything: if the bottleneck is just the serial link on the chip, we can just add extra serial links for the feature-extracted 11:56:09 data. So, how can we contribute to this ongoing effort, especially for the EIC, I must say? 11:56:19 Fernando's talk was really eye-opening; he mentioned a few key things about the tools and the front end. I think these need to be specified in the next few years; there's a good working base here from our products and also from other 11:56:33 collaborators and other vendors that have given talks here, so I think the goal is to create something, and it might be a hybrid solution, essentially, but knowing the specifications is critical here. 11:56:48 So essentially our basic tech is shovel-ready, and the good news is that the foundries predict that, for the next 10 years, the older nodes we are using seem to be staying around; the 250, 130, and 65 nm nodes will 11:57:04 likely be around for a while, so that's good to know. So knowing the specifications will help us iterate these designs, and getting more funding to be able to reuse and integrate these designs into small demonstration projects 11:57:22 is also going to be very helpful.
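The multi-bank idea mentioned above can be caricatured in a few lines. This is a hedged sketch only — the bank count, class name, and bookkeeping are invented for illustration, not the actual chip design: sampling continues in a free bank while a triggered bank is frozen for readout, so the system is dead only when every bank is busy.

```python
# Conceptual model of multi-bank sampling for (near) deadtime-less
# operation. All names and parameters here are illustrative assumptions.

class MultiBankSampler:
    def __init__(self, n_banks=4):
        self.free = list(range(n_banks))  # banks available for sampling
        self.busy = []                    # banks frozen for readout

    def trigger(self):
        """Freeze the currently sampling bank for readout; sampling
        continues in the next free bank. Returns False (a dead trigger)
        only when all banks are already busy."""
        if not self.free:
            return False
        self.busy.append(self.free.pop(0))
        return True

    def readout_done(self):
        """A frozen bank has finished reading out and becomes free again."""
        if self.busy:
            self.free.append(self.busy.pop(0))

s = MultiBankSampler(n_banks=2)
```

Below a certain trigger rate (readouts complete before all banks fill), every trigger finds a free bank and the system runs with zero deadtime, which is the behavior the simulations are probing.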
11:57:25 There is quite a bit of a knowledge base created through significant investment from the DOE SBIR program, and so our goal is to be able to commercialize this so we can sustain ourselves, stay around, and essentially be able to help 11:57:40 the community here. These chips have been designed with experiments in mind, so there's a high level of integration: there's lots of built-in calibration, clocking, memory, and other aspects in all these chips that we hope will ease 11:57:58 the pain of designing an experiment. They are commercial-grade; obviously we use the latest and greatest commercial-grade EDA tools for this development. 11:58:09 There's no problem with mass production or sales of products or services. 11:58:17 We've been teaming up quite a bit with scientists on EIC PID readout, and you can see some of the white papers we have helped develop. 11:58:29 And so the goal is to also continue this path on the back-end side and on other sub-detectors. 11:58:38 We've also been creating strategic partnerships with system integrators; I'm not at liberty to name any yet, but the goal is to be able to work with bigger players that are active in the field and can potentially help us with their customer outreach 11:58:54 and create adoption. 11:59:00 We also have a good working relationship with the University of Hawaii and several national labs; most of them are on this call, and in the past we've 11:59:08 negotiated contracts with them and done deals, so the road is paved: we have the mechanics and the dynamics in place to either send out a small subcontract or receive a small contract. 11:59:22 And these are things that we've flexed our muscles on before, so that overhead aspect of it is pretty much gone.
11:59:30 Okay, so I'm about to wrap up here. In terms of summary: we talked about our digitizer electronics from a kind of overview aspect, without getting into the details; there are lots of publications out there that I'm happy to point you towards, and we'll be submitting 11:59:43 more papers for the IEEE 2021 conference this year, so I'll be happy to share a preview of that with interested parties. Packaged chips and evaluation cards are available. 11:59:55 There's lots of additional testing going on every day; the pandemic has made it a little bit difficult for us to test some of our chips. 12:00:07 It just takes a little longer, but we've been doing it, and now, with everybody getting back to work, vaccinations and other things, we're hoping to expand that process, including radiation testing; we have expertise on hand. 12:00:21 So, we are physicists who have been trying to cater to the needs of physicists, but going out of our way to design these chips using electrical engineering techniques, essentially; that's how we feel connected 12:00:36 to the community. As for funding, SBIRs are a great mechanism to cover the extremely expensive chip development, and others can probably tell you the same thing. 12:00:50 But we're now also working on trade studies for these initial assessments; sometimes the SBIR cycle might be too long to wait for these initial feasibility studies, 12:01:02 and that's where trade studies come in, and also custom design contracts: we've had good success with several national labs designing board-level hardware, software, and firmware, integrated into one package, and giving them either the design for them to go 12:01:18 and fabricate, or actually giving them the board, or the board plus the readout that they wanted. So those are all on the table, essentially. The next step for us is to continue chip and PCB development,
12:01:34 continue engagement with the community, and also collaborate, work, and team up with other users to be able to write compound proposals, get our work funded, and also be able to give back to the community, essentially. 12:01:50 So we do have a variety of evaluation cards available for testing, and we're constantly working on the firmware and software, either through internal R&D or through funded projects, so things are getting better day by day. 12:02:15 That's it for my talk, so thank you for your time. I would like to thank the Department of Energy Office of Science, all the people that have been mentioned, the program managers, and also the people on the call that have been helpful to us 12:02:22 over the years, who have written letters of support, and who have been there to help us specify, flesh out, and understand the field and be able to be responsive in it. 12:02:36 The University of Hawaii and the Hawaii Technology Development Corporation have also been great supporters of ours. 12:02:41 Thank you for your time, and I'm happy to answer any questions. 12:02:48 Thank you, sir, that was quite amazing, a lot of work. I had a question, but you kind of answered it when you showed the work you've been doing for the LAPPD. 12:03:04 You mentioned that you might not be able to name names, but it seems like this next level of integration is a big step, right? You have a company, you find another company. 12:03:17 And I don't know how it works with SBIRs, but could you propose with Incom, say, to submit a proposal together? I don't know if that's how it works. 12:03:30 Is there any effort to go in that direction, where you team up with another company that makes detectors only and needs experienced DAQ expertise? 12:03:42 Yeah, that's a good question. It depends on the context and the type of funding we're going after.
12:03:49 But we are actually working with several detector companies; some of them make classical, commercial off-the-shelf devices, and they're interested in electronics too. 12:04:05 And something like the LAPPD is a little newer, and readout is a real challenge, because you really need to have a waveform to be able to capture the data, and so there's not much available out there that's commercial; 12:04:16 there is no off-the-shelf commercial readout. So it depends on the project. 12:04:23 In this specific case, what we did was we proposed, and we had Incom as a subcontractor, essentially. 12:04:29 And so they got a little bit of funding to be able to spend some hours with us and help us get the specifications out, so at least some of their effort is paid for; but they also need the electronics, so it's a very good relationship 12:04:44 to have here: we want to demonstrate electronics, they need electronics, so a good demo project came together. That's a very good example, and it's state of the art, right, it's happening now, so that's 12:04:58 interesting, because with the EIC, I don't know how it's all going to go, but there are procurements; you have to submit requirements to procurement and buy these things, and if there are multiple options you could sole-source, 12:05:13 but that's down the road. I was just wondering how, as a small company, you could team up with another small company and submit a proposal, but it sounds like there are avenues to do that. 12:05:29 Absolutely, absolutely. I think we're just going out there, flexing muscles for both of us, and really learning by working with another company. 12:05:40 So, two or three years down the road, these will be a lot
more mature; there will potentially be a catalog of products to choose from, with data sheets, so that people know what they're getting, 12:05:55 and the goal is to get there, essentially. Right, I have a few hands up, so let's just go down the list. John? 12:06:04 So, that's a rather broad question. 12:06:06 When we wrote the report, we got conflicting comments about how long it takes to develop an ASIC. 12:06:18 So I want to ask you, and the other people in the session: 12:06:22 let's say you get funding, and you have a plan of what you'll build. 12:06:27 How long does it really take, from the idea of the chip to having them in hand in quantities like 1000 chips? In years, just roughly: is it more like two, is it more like ten, or somewhere in between? 12:06:47 Gosh, that's a really tough question. I think your estimate range is correct, 12:06:56 and unfortunately the standard deviation is quite high. The reason for that is, first of all, the specifications of the ASIC, the technology, the team, 12:07:05 how much reuse you can have: if you've worked in that certain semiconductor node before, and if you have designed sub-circuits that you can now kind of cobble together, 12:07:18 that can expedite the path a lot. 12:07:22 And then, how many revisions do you envision? Others can probably tell you about this too: if you're going into, say, 22 or 28 nanometer CMOS and it's a new node for you, 12:07:34 your first chip is probably not going to be a fully functioning chip; it might just be a test structure. 12:07:39 And every time you make one of these, you're burning six months of wait time for foundry fabrication, to come back, test it, and see what's going on,
12:07:49 just to characterize the sub-circuits that you're proposing for the rev-one chip, essentially. So it really depends. 12:07:57 Here's an example of a chip that, from start to end, took us nine months, and we still have to turn it on, test it, and put it on a board, so the work is not done yet. 12:08:10 But we designed it with quite a bit of confidence, so we packed it; it's a pretty big die, lots of things that can go wrong, but we felt pretty confident that we have a good set of sub-circuits that we 12:08:33 feel are operating, maybe not at top performance, but at a level of function such that we can get revision one out and get some function from it. But it will certainly need iteration; you'll definitely need versions two, three, four, and that's what we're hoping 12:08:48 to use the funding for, so another couple of years to bring it to maturity, essentially. But we've had chips that took a year, a year and a half, to get revision one out the door. 12:08:54 Okay, thank you. 12:08:58 Jin? 12:09:02 First, to echo Chris's point, this is an impressive set of work. My question is on a particular technical point. In your slides, 12:09:19 at the end, you showed very fast timing. It will also have an application, hopefully, for a time-of-flight system, as well as for the EIC's DIRC system, which can also provide timing; that is part of the key ideas there. 12:09:38 So my question is: could you also comment on the timing distribution system and its precision? It can also be a very important component of the vertical integration, to not only have a precise, fast digitizer 12:09:47 but also a precise timing system, which also needs to be vertically integrated, like the clock. So I'm just wondering, could you also introduce some of your efforts there?
12:10:00 That's an excellent point, essentially. In fact, this work was funded under that topic: 12:10:07 timing tools, electronic tools for picosecond timing, essentially. And when you're going there, it's not just about sampling fast and sampling accurately, but also about being synchronized with the whole system: 12:10:21 every front-end digitizer must get the same clock, or a good copy of it, essentially, so I think this is an exercise for us to test at the board level. 12:10:32 And we do have clock-distribution chips and on-chip and on-board clock generation here that are quite accurate, so I think the board level will be in good shape, but the next step is also for us to tile, or have two or three of these operating 12:10:50 next to each other, essentially, and see how well we can work in a distributed system. So those design constraints are in mind, but obviously we have to learn to walk before we can run. But absolutely valid points. Yeah, it will be very 12:11:16 interesting to follow up and learn about this work, thank you very much. 12:11:14 Looking down the list, I see no other hands, so 12:11:19 thank you once again. 12:11:22 And I see that takes us to the end of the session. So thanks. Thanks for having me. Yeah. 13:59:22 Relax. 14:00:28 Let's get started. 14:00:36 I started the recording. 14:00:39 Okay, sounds good. So, I think there are still many folks joining, so maybe let's slowly get started with this session. 14:00:50 So today we just have a few slides to motivate today's session, 14:01:00 starting with the rates on the table. From the proton cross section alone, we can immediately see that the EIC is very, very different from either of the colliders 14:01:10 operating today, like the LHC, 14:01:13 and the dominant cross section that we need, for DIS events, is much, much lower.
14:01:20 And so the collision signal is very different. 14:01:44 In addition to that, DIS events are also very precious and also very diverse topologically, and that actually gives us the motivation not to put in a hard hardware trigger in the data processing, and to try to read out everything. 14:02:04 And also, many of our DIS measurements are systematically limited; that also pushes us toward a triggerless system, so that we can keep the systematics constant and under control, 14:02:35 with really good isolation from trigger inefficiencies, while the raw data can flow to modern computing centers, 14:02:35 so that we can trust the system for calibrations. 14:02:39 And it also opens opportunities: people will need to figure out how to throw out the uninteresting combinations, which come from various backgrounds, while rare event categories at the EIC will be interesting. 14:03:06 Obviously, as discussed in the early part of the workshop, this is not the whole story yet; a lot is still quite uncertain, 14:03:15 notably the backgrounds. In particular, one big uncertainty is the synchrotron radiation, as many machines have experienced, 14:03:28 and studies of that are ongoing as we speak. 14:03:45 This is from last summer, a detailed simulation showing that we will be seeing lots of keV photons, for example in the MAPS-based silicon tracker; they will be of high enough energy to fly out of the beam pipe and interact in the silicon, 14:04:01 leaving hits. And it is not only the silicon tracker:
many detectors would see some of them, and also in the backward-going direction we see substantial backgrounds, 14:04:16 and luckily our colleagues have started studying this very seriously, to balance and absorb them in the machine design. 14:04:40 And then there is the beam gas: for both the baseline machine and the optimized machine, estimates are still ongoing, 14:04:50 and what we've been seeing is something with a rate quite comparable to the DIS collisions, or maybe even higher. 14:05:01 So these studies will be ongoing, and besides that, a lot of the detector design is still ongoing. 14:05:11 Therefore, our computing design has to be ready to handle such backgrounds in real time. 14:05:18 And as shown in the talks yesterday, including Fernando's, we try to be conservative in designing the pipeline, so that the data are digitized and zero-suppressed at the front end, 14:05:53 buffered for a second or so before going into a farm, and in the next step 14:05:49 we imagine an online processing farm, 14:06:11 where both FPGAs and more conventional computing technology will reduce the noise in the data with various strategies; otherwise, the data we put into storage would be dominated by background. 14:06:18 Therefore, the streaming DAQ design is going to need to deal with background and noise. 14:06:29 That is what the EIC DAQ is designed to do now, 14:06:34 and that is what the EIC DAQ is designed to deliver. Meanwhile, as we will hear in many of the computing talks, both edge computing and commercial off-the-shelf computing 14:06:47 are becoming highly reliable 14:06:51 and, I believe, can reduce the streaming data so that it will economically fit into the storage.
14:06:59 And so it's a broad topic. For instance, traditionally, what we think of as triggering can still be used; we just simply ask the question in a different place, 14:07:11 which would mean software, in some form, on a farm. 14:07:15 And beyond that, as we have discussed in many of the workshops in the past, 14:07:20 streaming readout does not only handle the more traditional roles; it also enables online 14:07:30 calibration and initial reconstruction, so that we can access the data as early as possible. That is less traditional, but nevertheless, in this workshop, let's first turn our attention to the data sources and to the essential question of how this pipeline 14:07:47 can be delivered. 14:07:50 So that becomes the solid ground of this session. Following the last sessions we had on front-end electronics, 14:08:17 we want the front end to have low power and low noise, and to apply as many strategies as we can to reduce the data 14:08:14 in the streaming system. Starting from the morning talks, there has been rich interest in feature extraction and zero suppression; that's a very fruitful way to avoid sending out all the uninteresting data. 14:08:30 And the following data-reduction topic is about techniques like feature extraction, our first topic: 14:08:53 for example, pulse fitting, which can be done in FPGAs at the front end, as well as possibly with machine-learning techniques; that would be a very effective way to reduce the data volume. 14:08:58 And we can always do local and global triggering, for example software triggers, 14:09:10 or machine-learning-based online selection of the data. 14:09:11 And once we have the data, we can always ask the system how to remove noise and do compression; many systems already have a compression algorithm built in.
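As a hedged illustration of the kind of lossless waveform compression mentioned above — not any specific experiment's scheme — delta coding followed by a generic compressor (zlib here) is one common combination, exploiting the fact that ADC samples change slowly:

```python
import struct
import zlib

# Illustrative lossless waveform compression: delta-encode 16-bit ADC
# samples, then run a general-purpose compressor over the byte stream.
# The scheme and helper names are assumptions for this sketch.

def compress_waveform(samples):
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    raw = struct.pack(f"<{len(deltas)}h", *deltas)  # little-endian int16
    return zlib.compress(raw)

def decompress_waveform(blob, n):
    deltas = struct.unpack(f"<{n}h", zlib.decompress(blob))
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)  # undo the delta encoding
    return out

# A slowly varying baseline round-trips exactly (the scheme is lossless).
wave = [100 + (i % 3) for i in range(1000)]
blob = compress_waveform(wave)
```

Because the transform is lossless, it can sit anywhere in the pipeline without affecting physics; lossy reduction (zero suppression, feature extraction) has to happen upstream, where the trigger logic lives.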
14:09:23 And we are also looking at machine-learning algorithms to do that too, including for compression, and we have a computer scientist to talk about this in this workshop. 14:09:35 And last but not least: we can always do high-level object reconstruction and data management as well, and one example is to align the online and offline 14:09:47 reconstruction, which gives an even bigger data reduction, 14:09:52 though it needs lots of computing resources. And on the last piece, we are also glad to have, after two physics talks, a talk about data management in operation, with a national lab effort as well. 14:10:07 So that becomes the backbone story of our session, and that's how the session is organized. We have built in lots of discussion time so that we will be able to use it, 14:10:19 so I would encourage everyone to participate, ask questions, and have discussions; we have left time for that. And with that, over to you, Marcus. 14:10:33 So, first of all, great summary, Jin, and I'm very excited for this session. I really want to emphasize one point you mentioned in your talk but did not really highlight, even though I think it's maybe the most important point of 14:10:45 software for streaming readout, and of the streaming readout approach itself, and that is the opportunity to accelerate science. The faster we get what you called the higher-level objects out to our community, to experimentalists and also to theorists, 14:11:04 the faster our research groups will reach the goal, and I think that alone, apart from all the things that you rightfully mentioned, should be a main driver for real-time 14:11:23 alignment, real-time calibration, real-time reconstruction, and real-time analysis at upcoming experiments at future facilities. I fully agree, and you also touched on some very important topics of the computing 14:11:43 system.
And I think that's also well covered in the past workshops, so I'm not trying to make it a particular topic or focus of this session, but I think it's extremely important. 14:11:58 And I also want to echo your point that, to get the science out, we not only need to do the counting right; the calibration and the control of systematics also have to be right. 14:12:11 And as we all know, too much time is often spent on just quantifying the systematics, and I think it will be the same at the EIC stage, so there will be much work in the coming years to be able to deliver that, both reliably 14:12:29 and, as you explained, as fast as possible. So I just want to echo your point. 14:12:40 Thanks, Marcus. 14:12:50 Okay, if there are no other 14:12:55 questions on this quick introduction, let's go to our first topic. Chris, would you like to share your screen, please? I will stop sharing. 14:13:12 Can you hear me? 14:13:16 Yes. 14:13:16 Good. Okay, so here, let me just share my screen. 14:13:24 Okay. 14:13:29 Can you see my screen? 14:13:32 It's coming... I can see it now. 14:13:37 Okay. So, for those of you who are multitasking on Zoom: I learned today, at the Nuclear Science Advocacy Day, to do executive summaries. So, here you go. 14:13:49 What I'm going to present is the sliding least-squares recursive piecewise-polynomial algorithm, which can do fully covariant, nonlinear least-squares template pulse fitting in real time on streaming data; we'll get to what 14:14:04 that means in the talk. 14:14:06 This is based on an arXiv paper that, I just realized, I didn't put in the references, 14:14:11 but I'll put it in the chat. 14:14:14 So, as far as streaming readout goes, this will enable the application of sophisticated level-one trigger logic:
14:14:23 for example, threshold triggers on energy, or fast cluster and track triggers, and we can get the very best, 14:14:31 basically offline-quality, analysis right in the first stage of your front-end electronics. 14:14:39 And, I think more importantly, I want to show that you can build very general and extensible building blocks for a wide range of applications, which could be used for all sorts of very fast pulse fitting at 14:14:56 the very low level of the DAQ systems. 14:15:00 And so this talk will be a little bit different: instead of summarizing a broad range of progress, I'm going to focus on explaining this particular algorithm, and I'm sure it's a little elementary for this audience, but 14:15:12 what I really want to emphasize is the power you can achieve with very simple building blocks. So first I'll just say a couple of words about the Nab experiment, which was the background for all of this development, 14:15:25 and also bring everyone up to speed on the standard tools of digital spectroscopy, in particular the trapezoid filter and its various pieces. 14:15:34 And then we can go on to the sliding least-squares filter as a finite impulse response filter, and how to implement that in an FPGA with a very generic piecewise-polynomial filter. 14:15:50 Okay, so just a few words about the Nab experiment. 14:15:54 It was done to study neutron decay, not just the average rate. Let me just put on my laser pointer; can you see that now? 14:16:03 So here's the average decay rate for the neutron, but there are also correlations that depend on, say, the outgoing momenta or energies, or the spin values, of the incoming or outgoing particles. 14:16:20 And, 14:16:20 by parity violation, these correlations are proportional to the axial coupling constant.
14:16:27 And together with the lifetime, you can then extract basically one element of the CKM matrix, Vud. 14:16:39 And you can extract the electron-neutrino correlation, one of these correlations, just by measuring the energy of the electron and the momentum of the proton. However, these protons have less than a keV of energy, so we can't detect them directly. 14:16:52 Instead, we have to infer their momentum by looking at their time of flight, and we do that over a five-meter-long flight path in a superconducting solenoid. 14:17:02 And you can see the spectrometer being installed at Oak Ridge at the SNS. 14:17:10 And we use a silicon detector, ion-implanted and segmented into 127 segments, 14:17:21 and the reason we do that is so that we can distinguish real coincidences from accidentals, because the electrons and protons should be guided along the same magnetic field line. 14:17:32 Note that the top detector for the time of flight is at negative 30 kilovolts, so that we can accelerate the proton and actually see it. 14:17:43 Okay, so, our data acquisition system: 14:17:49 our DAQ is very small compared to the EIC, or sPHENIX-type physics, but I see it as a testing bed for technology we could put into the EIC. 14:18:01 And as far as the acquisition system goes: about seven years ago, when we started on this project, one of the only commercially available systems where you could actually program an FPGA very simply was National Instruments. 14:18:17 Essentially you program it in LabVIEW; on one side there's a block which is the ADC, on the other side there's a block, in firmware or just in LabVIEW, for the DMA FIFO, there's another block for the DRAM controller, and you basically put anything 14:18:36 you want in between, using their pre-packaged digital algorithms and pieces. 14:18:45 And so we have crates; each detector has a crate with its hundred and twenty channels.
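A quick back-of-envelope check of the time-of-flight principle described above (illustrative only: the 500 eV proton energy is my own example number, and the calculation is non-relativistic):

```python
import math

# Why time of flight works for Nab-style protons (illustrative sketch):
# a sub-keV proton is far too slow to detect directly, so its momentum
# is inferred from the flight time over the ~5 m path, and it is only
# accelerated onto the detector at the end by the -30 kV potential.

M_P_KG = 1.672_621_9e-27    # proton mass in kg
J_PER_EV = 1.602_176_6e-19  # joules per electronvolt

def tof_seconds(kinetic_ev, path_m=5.0):
    """Non-relativistic time of flight for a proton of the given
    kinetic energy over the given path length."""
    v = math.sqrt(2 * kinetic_ev * J_PER_EV / M_P_KG)
    return path_m / v

t = tof_seconds(500.0)  # an assumed 500 eV proton, for illustration
```

The inverse relation (flight time scales as one over the square root of the energy, i.e. one over the momentum) is what lets the proton momentum be reconstructed from a timestamp alone.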
And those are connected by fiber to controller and readout computers, so that we don't have to worry about high voltage. 14:19:00 And I'd also like to point out that, in the meantime, we've found these basically hobbyist boards for about $300 for two channels, and the nice thing is these also come with open-source software and firmware. 14:19:14 And so we've been able to play around with these boards and put our firmware right on these boards as well. 14:19:20 And it's nice now that other companies like CAEN also have open-source firmware available, so that we can apply these to a broad range of different hardware. 14:19:35 So we took a little different approach to readout: instead of triggering on an individual event first and then validating before readout, we decided to do a more streaming readout format, and you may say, well, this isn't streaming readout, 14:19:50 but I'm going to make the argument that it's quite closely related. So, basically the waveform is continuously buffered in a giant ring buffer, while simultaneously going through energy and timing filters and discrimination. 14:20:04 And we avoid the local level-two trigger logic by sending a lightweight trigger stream straight to a CPU, which collects all of the events from every detector and can then apply high-level trigger logic and decide exactly which elements of the data, 14:20:23 which are being continuously read out, to basically filter out and pass along for recording. 14:20:31 Okay. 14:20:52 And note that since all of the data are being buffered all of the time, the trigger logic can really decide on any sort of schema of data to read out, or even data which wasn't triggered on. 14:20:49 And so that's the sense in which it's streaming: we have all of the data available, and we can make decisions basically to down-select that data into any subset we want 14:21:01 before we pass it on.
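The buffering scheme described here can be modeled in a few lines. This is a minimal illustrative sketch (the class name, buffer depth, threshold, and window size are made up for the example; a real system does the buffering and discrimination in firmware):

```python
import numpy as np

class StreamingReadout:
    """Toy model: waveform continuously buffered in a ring buffer while a
    simple discriminator sends a lightweight trigger stream to the CPU,
    which later selects any window of the buffered data, triggered or not."""

    def __init__(self, depth=4096, threshold=50.0):
        self.buf = np.zeros(depth)   # the "giant ring buffer"
        self.depth = depth
        self.head = 0                # write pointer
        self.triggers = []           # lightweight trigger stream (sample indices)
        self.threshold = threshold
        self.n_written = 0

    def push(self, sample):
        # Continuous buffering: every sample lands in the ring buffer,
        # while the discriminator feeds the trigger stream.
        self.buf[self.head] = sample
        if sample > self.threshold:
            self.triggers.append(self.n_written)
        self.head = (self.head + 1) % self.depth
        self.n_written += 1

    def read_window(self, center, half=8):
        # CPU-side down-selection: pull any window still held in the
        # buffer, whether or not it was triggered on.
        idx = [(center + k) % self.depth for k in range(-half, half)]
        return self.buf[idx]
```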
And I think that's also applicable to larger projects like the EIC. 14:21:07 And I'd also like to show next that we can actually put a completely generic filter in the FPGA that's configurable at runtime, just by configuring registers. 14:21:24 And thus all of the trigger logic can be configured in the CPU, which makes debugging much easier. 14:21:32 And that's kind of along the same lines as what Sergey was talking about for CLAS12 yesterday. 14:21:41 So before describing the least squares filter, the recursive piecewise polynomial, I'd like to describe the standard trapezoid filter in terms of the basic building blocks, because many of the pieces are the same. 14:21:53 Essentially a trapezoid averages out the charge on the peak of a tail pulse, minus the exponential decay background on this side. 14:22:05 And the nice thing about it is that it can also account for extended rise times as the charge is collected, 14:22:21 called ballistic deficit. And so basically it averages out to a nice flat top where you can read off an average energy with a lot more precision, and this is essentially the state of the art for spectroscopy. 14:22:26 Many of the systems we tested, that was their basic way of doing it. For the building blocks, we have basic arithmetic. 14:22:36 Adding can be done in the fabric, but the Xilinx FPGAs have special DSP slices for multiplying efficiently, including multiply-accumulates, which you see in dot products for instance, or convolutions, or FIR filters. 14:22:54 And then after that there's just the unit impulse function, which of course slides across the function, and since we're picking off the value right here and recording it here, 14:23:05 you can see it's delayed by that much. 14:23:08 And we implement that as either shift registers or just buffers.
14:23:14 And then the step function. And I should note that the impulse function can be treated just as a basis function by itself, or if you convolve with it, then it becomes a filter: it becomes the delay, or the identity filter 14:23:28 if there's zero delay. 14:23:30 And the other function we use is the step function, which represents the amount of charge collected in the detector, for instance, but as a filter it becomes integration. 14:23:41 So, as it comes across the step, you can see that it basically outputs the integral of this constant function. 14:23:51 And that we do with an accumulator: every clock cycle, this shift register here records the running total, which is added to the new values coming in. 14:24:02 And the commutative property of convolution means that we can treat these functions either as filters or as functions. 14:24:19 And the associative property of convolution means that we can break a complicated filter function into a convolution of simpler functions, and then build up that convolution in stages. And we'll see that first of all with 14:24:29 the boxcar filter. 14:24:29 So, if this is your input signal (we're ignoring the bleed resistor for now), the most common operation would be just to average N samples and output it. And that's done with the boxcar filter. And you can see you get this rising edge as the filter is sliding across the edge of your charge, or actually you could think of 14:24:54 the data sliding across the filter, I guess. 14:25:01 And so the rise time here is just the same as the amount of time we're averaging our pulses over. 14:25:09 And, of course, that's just two accumulates: one to add the new point on the front edge, and another one to subtract off the point on the falling edge as the filter slides along.
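The two-accumulate boxcar described here can be sketched directly. A minimal Python illustration (a real implementation would be fixed-point arithmetic in fabric):

```python
import numpy as np

def boxcar(x, n):
    """Sliding sum over the last n samples, built the way described in the
    talk: one accumulator, one add for the new point on the leading edge,
    one subtract for the point falling off the trailing edge."""
    y = np.zeros(len(x))
    acc = 0.0
    for i in range(len(x)):
        acc += x[i]              # add the new point on the front edge
        if i >= n:
            acc -= x[i - n]      # subtract the point sliding off the back
        y[i] = acc
    return y
```

The cost per sample is two additions regardless of the window length n, which is what makes this building block so cheap in an FPGA.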
14:25:24 So, we can also take the height of the pulse minus our baseline by just doing another delay and subtract. 14:25:33 And that has the trapezoid shape. And you can see the flat top here is essentially the gap between where we're averaging the top and where we're averaging the baseline. 14:25:46 And that can allow for, say, a rise time in your signal, or whatever. 14:25:51 Okay, so that would be a trapezoid filter, except we forgot the bleed resistor. So we can go back and put that in by noting that the integral of an exponential decay is an exponential that rises up. 14:26:05 And so if we add the decay to an integrated version of itself, 14:26:10 then we get back to our step function, which we then know how to average over. 14:26:16 And so here's the filter that straightens out the exponential decay. 14:26:23 And the result is of course the step function, and then we have a double boxcar that gives us the output of a trapezoid. And by composition, of course, we can combine these two filters together: 14:26:39 we can put both of those filters in series, and we can convolve them to get the impulse response function for the trapezoid filter. 14:26:51 And so, to summarize: that filter, convolved with your tail pulse, gives you your trapezoid. 14:27:01 Okay. So that's a quick overview of the trapezoid filter itself. 14:27:05 Now I'd like to... 14:27:08 and here's just an example of it sliding over a tail pulse. And, of course, in practice we have very noisy signals, and you can see that the trapezoid is much more stable and gives you much better 14:27:20 energy resolution, even in background noise. 14:27:25 Okay, so now I'd like to take these pieces all together and actually do template fitting of waveforms in real time. And first of all, we're just going to review linear least squares fitting.
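The full construction described above (pole-zero correction restoring the step, then a double boxcar) can be sketched in a few lines. A minimal illustration; the discrete pole-zero correction here is only approximate, and the parameters are made-up values:

```python
import numpy as np

def trapezoid(x, rise, gap, tau):
    """Trapezoid filter from the talk's pieces: undo the bleed-resistor
    decay by adding the running integral divided by tau (approximately
    restoring the step), then take a double boxcar: the average over
    `rise` samples minus the same average `rise + gap` samples earlier."""
    # pole-zero correction: decay plus its integral gives back the step
    step = x + np.cumsum(x) / tau
    # boxcar sums via an accumulator (cumulative sum) and delayed subtracts
    c = np.concatenate(([0.0], np.cumsum(step)))
    top = c[rise:] - c[:-rise]          # sliding sum over `rise` samples
    d = rise + gap
    return (top[d:] - top[:-d]) / rise  # top average minus baseline average
```

Sliding this over a unit-height tail pulse produces a flat top near 1 at the pulse location and zero elsewhere, which is the property used to read off the energy.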
14:27:38 So, if you have a piece of data you want to fit to a bunch of basis functions, you put them together as the columns of a design matrix. 14:27:47 And then basically, by taking the pseudo-inverse of that design matrix and multiplying it by the original waveform, 14:27:57 you can get the coefficients, the linear combination of each of your basis functions present in the waveform data you're trying to fit. 14:28:13 Okay. And note that each column here is multiplied by the matrix; each of those is essentially just a dot product, and there are, in this case, five of them. 14:28:22 So, what we'd like to do is add one nonlinear parameter, which is the t0. 14:28:28 And the way to do that is basically to do the same fit, but to do it once for every value of time; in other words, take that fit and just slide it along the waveform. 14:28:43 Okay. And so, to get the coefficients, we end up with a sliding dot product, where we take these response functions, the rows of our pseudo-inverse matrix. 14:28:57 And each of those rows basically slides along: you do a sliding dot product, and that value just slides along the waveform. 14:29:09 And that is convolution. So, we can get each of our fit coefficients as a function of time if we take this pseudo-inverse of the 14:29:35 design matrix and convolve it with our waveform. 14:29:30 Okay, so. 14:29:35 So, essentially what we have to do then is five convolutions, and we'll get five new waveforms, each one the value of one of these coefficients, for one of our template functions or basis functions, as a function of t0. 14:29:50 And you can see right here: the blue curve is the value of the amplitude of the pulse we're looking for. You can see it sharply peaks at this point, which is the correct t0.
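The sliding least-squares fit just described can be sketched compactly: basis functions as design-matrix columns, rows of the pseudo-inverse slid along the waveform. A minimal two-template illustration (the exponential template, lengths, and amplitudes are made-up values, not the Nab templates):

```python
import numpy as np

# Design matrix: a tail-pulse template plus a constant-baseline template.
L, tau = 40, 10.0
t = np.arange(L)
pulse = np.exp(-t / tau)                    # illustrative tail-pulse template
A = np.column_stack([pulse, np.ones(L)])    # L x 2 design matrix
P = np.linalg.pinv(A)                       # 2 x L pseudo-inverse

# Synthetic waveform: flat baseline 0.5 with one amplitude-3 pulse at t0 = 25.
y = np.full(120, 0.5)
y[25:25 + L] += 3.0 * pulse

# Sliding dot product of each pseudo-inverse row against the waveform:
# coeffs[k, i] is the k-th fit coefficient for trial t0 = i.
coeffs = np.array([np.correlate(y, row, mode="valid") for row in P])
t0 = int(np.argmax(coeffs[0]))              # amplitude trace peaks at true t0
```

Each row of the pseudo-inverse becomes one FIR filter; running all of them gives every fit coefficient as a function of the trial start time, exactly as in the slide.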
14:30:01 At the same time, each of the noise coefficients also vanishes. 14:30:04 Okay. And if we show a fit just too early, you can see we don't get a good fit, nor do we if we do it a little bit later. 14:30:14 Okay, so finally, we can find the chi-squared now as a function of t0 as it's sliding along. 14:30:21 And you can see you get terrible chi-squareds except for where you're very close. 14:30:26 And, in fact, you can compute the chi-squared in an FPGA-friendly way by expanding the norm of the difference squared into a difference of sums. 14:30:38 And this can be implemented as convolutions. 14:30:42 Essentially, this first piece is a pointwise multiplication of the waveform, I'm sorry, a pointwise squaring, and then combining that with the boxcar. 14:30:53 And for the second one, remember that our fit coefficients are, in this case, a five-vector. And so you can do a very small sandwich of this vector with a constant matrix here, and subtract these two large numbers. 14:31:08 And what you end up with is a chi-squared that you can actually evaluate in real time. 14:31:15 And so then you just need an algorithm to search for the minimum of this peak. And then you essentially have a full least-squares chi-squared fit. 14:31:29 Now, I didn't talk about putting in the covariance matrix, but essentially it doesn't change anything. The only difference now is that in your chi-squared, instead of having y-transpose y, there's a metric in between, which is 14:31:49 the inverse of your covariance matrix. But it's essentially the same thing. 14:31:52 Okay, so that concludes the sliding least squares filter. In other words, you do a linear least squares fit, and then you slide it along your waveform to get a one-parameter chi-squared function that you can minimize to get a full 14:32:11 nonlinear least-squares fit.
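The FPGA-friendly chi-squared expansion described here (boxcar of the pointwise-squared waveform, minus the small "sandwich" of the coefficient vector with a constant matrix) can be checked numerically. A sketch using the same made-up two-template setup as above; the identity used is that for the least-squares solution, the residual norm equals y-transpose-y minus c-transpose (A-transpose A) c:

```python
import numpy as np

# Illustrative setup: tail-pulse plus baseline templates, as before.
L, tau = 40, 10.0
t = np.arange(L)
A = np.column_stack([np.exp(-t / tau), np.ones(L)])
P = np.linalg.pinv(A)
G = A.T @ A                                   # the small constant matrix

rng = np.random.default_rng(0)
y = np.full(120, 0.5) + 0.005 * rng.standard_normal(120)
y[25:25 + L] += 3.0 * np.exp(-t / tau)        # amplitude-3 pulse at t0 = 25

# Sliding fit coefficients (one correlation per pseudo-inverse row).
coeffs = np.array([np.correlate(y, row, mode="valid") for row in P])

# Chi-squared per trial t0: boxcar of y^2 minus the coefficient sandwich.
yy = np.correlate(y * y, np.ones(L), mode="valid")       # boxcar of y^2
chi2 = yy - np.einsum("ki,kl,li->i", coeffs, G, coeffs)  # sandwich term
t0 = int(np.argmin(chi2))                                # best trial t0
```

Only the pointwise square, one boxcar, and a tiny fixed matrix product are needed per trial t0, which is what makes the full chi-squared affordable in real time.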
And another thing you can actually do, for instance if you want to fit a nonlinear parameter like the exponential decay constant, is just come up with a bunch of templates that have different decay constants, and then 14:32:24 interpolate between them. 14:32:26 Okay, now the final problem is that we have, say, a large waveform. 14:32:32 In our case, 14:32:52 the templates are up to two or three thousand samples long, and to do the dot products just the standard way, you'd have to have a thousand multiplies and adds; that would completely exhaust the resources on an FPGA. 14:32:51 So, in the end, I wanted to put the pieces together to show you how to do very generic convolutions on an FPGA, so that we could implement the sliding least squares fit with very conservative resources. 14:33:11 So the key, as we already talked about, is that we can treat a step function, or accumulator, as an integrator. 14:33:22 And of course, if we integrate across a delta function, we just get the step function. And then we subtract off the integral at the end, so that we have a piecewise-defined function. 14:33:37 So from another pair of delta functions we can get a piecewise constant function. 14:33:42 And then we can integrate that again to get the linear function, again subtracting at the end to keep it finite in length, and so forth. And by adding all these pieces together, with just a few multiplies, 14:34:02 we can get a convolution with any piecewise polynomial. 14:34:06 Okay, so I said polynomial, but actually it's the diagonals of Pascal's triangle, for the exact same reason that Pascal's triangle works. 14:34:17 So here's the constant function, and this is the linear function.
The next function is almost a quadratic, but it also has a linear term: it's one half x squared plus x, and so forth. 14:34:28 So you get almost a polynomial, except it's in this weird Pascal basis, but that's not too hard. 14:34:39 Here's the implementation: you can just see a bunch of accumulators. 14:34:44 Yeah, a bunch of accumulators, and then each of these polynomials is multiplied by a coefficient and added up to get the complete polynomial response. 14:34:52 So what you can do is take any response function that you want, 14:34:59 any impulse response function, and approximate it with piecewise polynomials. And then, once you fit for those coefficients 14:35:11 in the 1 + t + t-squared basis, you can convert that to the basis that we get most naturally in our filters. 14:35:23 And then these become the coefficients that you implement your digital logic with. 14:35:31 And so, this is the response of the sliding least squares filter approximated with polynomials, and then stuffed quite efficiently onto an FPGA. 14:35:42 Okay. How am I doing for time? 14:35:45 That's pretty much the end of the talk. 14:35:49 So, in summary, I just wanted to say that by combining these very simple blocks, you can create very powerful digital filters in a streaming readout system that can be used to form versatile trigger logic. 14:36:05 And all of this can be done in real time at full digitization speed. 14:36:12 And, let's see here, 14:36:14 just playing around, my student... 14:36:18 wait... 14:36:20 well, anyway, my student developed trigger logic that was able to turn the tail pulse into a UK logo. So, the point is that this is very versatile and you can pretty much build any sort of trigger logic out of it. 14:36:39 That's all. Any questions? 14:36:43 Thanks, Chris. 14:36:46 I really appreciate those slides, by the way, with the handwriting on them.
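The cascaded-accumulator construction from the talk can be sketched numerically: each accumulator stage is a convolution with a step, a unit impulse through k stages gives the k-th diagonal of Pascal's triangle, and a few weighted delta taps into the cascade give a piecewise-polynomial convolution with only a handful of multiplies. Lengths and tap positions below are illustrative:

```python
import numpy as np

def accumulate(x):
    """One accumulator stage: running sum (convolution with a step)."""
    return np.cumsum(x)

# A unit impulse through cascaded accumulators: Pascal's-triangle diagonals.
d = np.zeros(8)
d[0] = 1.0                       # unit impulse
s1 = accumulate(d)               # 1 1 1 1 ...   the constant function
s2 = accumulate(s1)              # 1 2 3 4 ...   the linear function
s3 = accumulate(s2)              # 1 3 6 10 ...  the "almost quadratic"

# A boxcar (piecewise-constant response) from just two taps into one stage:
# a delta in, and a delayed negative delta to keep the support finite.
taps = np.zeros(16)
taps[0], taps[4] = 1.0, -1.0
box = accumulate(taps)           # 1 1 1 1 0 0 ...
```

Because convolution is associative, feeding the sparse taps into the cascade and accumulating afterward is the same as convolving with the full piecewise-polynomial response, which is the trick that replaces thousands of multiplies with a few.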
14:36:52 So let's open it up for questions. 14:37:03 Just a quick question: what's the maximum digitization speed you can support right now? 14:37:10 Yeah, it depends on the FPGA you have. So right now we're using 250 megasample-per-second digitizers. 14:37:21 And for our trapezoid we actually average pairs of points, so our filter is running at 125 megahertz. 14:37:34 But I think with more powerful FPGAs you could speed that up a lot. 14:37:41 Especially, for example, when we want to be sensitive to fine timing; that's the reason for my curiosity. 14:37:52 And then if you want to have lots of channels with slower resolution, you can multiplex a single multiplier to basically, you know, flip-flop between one multiplication and the other. 14:38:08 On the other hand, if you want to go really fast, you can multiplex the other way and have multiple multipliers all performing, basically have one do one sample and one do the next sample and so forth. 14:38:23 I think by doing that you could speed up to get the full gigasample per second, and resolution in your fitting. 14:38:31 That sounds exactly like how the multi-gigasample ADCs themselves work. 14:38:36 Yeah, it's common practice. 14:38:40 Okay, so from this session we built in extra discussion time, so if you have questions or ideas, please feel free to shout. 14:38:54 Maybe then, hopefully, one last question from me; my most persistent question is, again, the semi-Gaussian shape, and I hope that we'll be able to have an illustration of that too. 14:39:11 Oh. 14:39:10 Oh yeah, I figured the final one was supposed to be a basketball-winning logo, but not recently. 14:39:19 So yes, if you can send me waveforms, I'd be very glad to try out these filters on them.
14:39:26 As I understand, your waveforms have just tens of data points, so there's not nearly as much noise to filter out using the least squares fitting as 14:39:38 what we're used to. But I think it'd be very interesting to still try it out and see how well it can do, 14:39:47 etc. 14:39:48 etc. So, 14:39:51 that would be great. 14:39:52 Yeah. For testing we have a test bench for triggers, so we can follow up offline. 14:40:01 Great. 14:40:02 Okay, cool. 14:40:03 Okay, thanks Chris. If there are any further questions, you can send them to Chris. 14:40:14 I'm glad the animation worked. 14:40:17 And thanks again very much. 14:40:21 Okay, so it seems there are no more questions, so let's move to the next one. Would you like to share your screen? 14:40:34 Sure. 14:40:34 Can you see my screen? 14:40:37 I can see some nods. Good, nice. Yeah, please. 14:40:41 Can I go ahead? Okay, thanks. 14:40:44 Thanks for the invitation, Jen. So, machine learning on ASICs for streaming ADCs is a new project initiative at the Instrumentation Division at Brookhaven National Laboratory, in collaboration with the Computational Science Initiative division and the Physics 14:40:58 Division. 14:41:04 A brief outline of my talk: I'd like to start with the current signal processing chain, which is quite common for most of the front-end electronics for any detector, and move forward with what exactly we would like to propose for future 14:41:22 readout ASICs for scientific applications. Our prime objective is machine learning ASICs for front-end electronics, so we are investigating several machine learning algorithms, like the multilayer perceptron or convolutional neural networks, for 14:41:39 peak finding and optimization. Some of the preliminary results
I'd like to show here. And going forward, our approach for hardware design, be it a digital or analog approach to vector-matrix multiplication, and 14:42:01 looking into new devices like memristors. So currently, for most of the front-end electronics, the data is streamed out, and the processing is done offline, using either FPGAs, GPUs, DSPs, or accelerators. 14:42:16 So we were thinking it's high time that we introduce some smartness at the front-end electronics itself, where we have the source of the data. 14:42:26 And that is, obviously, edge computing, or a step towards edge computing. 14:42:31 Our approach for such an ASIC realization will be a co-design one, because machine learning algorithms are done completely on the software side, and we have limitations in terms of power and area on the front-end electronics, so it should be a hardware and 14:42:46 software co-design approach, with good interaction, and then seeing what inference accuracy can be achieved. As we are looking into new devices like memristors, we are looking into a beyond-CMOS approach. 14:43:01 So this will also be a device-and-circuit co-design approach. So the immediate applications that we figured out could be denoising and processing of these waveforms, especially digital peak finding, and also pixel integrated circuits 14:43:19 for 2D and 3D spatial resolution improvements. And then PET scanners, for the depth of interaction, and also the data concentrators, where we have a good amount of data from a significant portion of the front-end electronics, and by bringing 14:43:33 machine learning algorithms to the concentrator level you could always do some event reconstruction. 14:43:41 So this slide summarizes the conventional mixed-signal readout system, starting from the detectors.
We have charge-sensitive amplifiers followed by shapers, and then discriminators and also the peak detectors. 14:43:57 So the whole objective is to find the charge that is deposited on the sensor, and one way is to find the peaks of these waveforms. So currently we have a peak detector that does the sampling of the peak values, and we have an on-demand ADC, so 14:44:15 the ADC is not continuously running; it only runs whenever the capacitor of the peak detector charges to the peak value. And it's completely analog, if you look at everything up to the ADC point, and we would like to restrict that; there 14:44:34 is a trend from industry to restrict the analog front end up to the anti-aliasing filter of the ADC. And then we have the data transmission. 14:44:44 So here, instead of an on-demand ADC, we would like to propose a continuously sampling ADC, generating waveform snippets, and these waveform snippets can be passed through a DSP on the chip itself. The DSP could be a simple FIR 14:45:04 filter doing the job, or a neural network; for our applications, we are targeting a neural network as the DSP. 14:45:13 And the final goal is peak detection, peak finding on the sampled waveforms, and the time of arrival. 14:45:20 And on this slide, you can also see how the sensor responds to these charge deposits, and also how the ideal and convolved responses look. 14:45:33 So this summarizes more of 14:45:36 what we would like to do: we have a waveform that can be sampled, and once we have the samples offline, you can do fitting. 14:45:46 But we were thinking:
14:45:49 As you can see on the left of this slide, the responses are not ideal; we have noise from the sensor as well as from the front-end electronics itself. And we are trying to see: can we design, or come up with, a neural network that can learn each 14:46:04 channel's shape, and do the convolution in the general case? 14:46:11 So to achieve these goals, we started looking into basic neural networks, like the multilayer perceptron and also the convolutional neural network, but we have optimizations clearly targeting ASICs, so we have to optimize in terms 14:46:30 of area, that is, the overall number of neurons on each of the hidden layers. As you can see, memory is also a constraint or bottleneck, so we will explore pruning algorithms for reducing the weights. And for ADC design complexity, we want 14:46:57 to go with a smaller number of samples; this is again an optimization problem on the software side, the sub-sampling rates. So we have a software framework taking the sensor response and also the chain of front-end electronics, 14:47:08 and that will be acting as the ground truth for us. And we have more than 10,000 snippets at this point, but we would like to increase the data sets, and each waveform snippet is 3400 points with a resolution of 10 14:47:26 seconds. And we have split these waveforms for training as well as for validation: 80% was used for training, 10% for validation, and 10% for testing. 14:47:37 And you can see both waveforms here, the impulse response and the sensor response. 14:47:44 So the first optimization was looking into what the number of neurons in each layer could be, whether it could be three layers or four layers; we started optimizing the number of neurons and finding the mean absolute error 14:48:00 against the ground-truth peak values. And as you can see, the absolute error goes down
14:48:08 as we increase the number of neurons, and beyond a certain level the error starts increasing; this is a point we would like to explore further. 14:48:18 This is the optimal neural network and we don't want to go further. So, this is the first step of optimization with the machine learning algorithms. And then this slide summarizes the sub-sampling rates that we are talking about: can we take only a few 14:48:36 samples of the waveforms that are coming in, do our inference through machine learning, and what would the accuracy level be? So this basically reduces the complexity on the ADC part. As you can see on the left side of this slide, as 14:49:00 we increase the sub-sampling rate, the number of input data points goes down; at a sampling rate of four we are taking one sample for every four of the original points, with 200 points on the x axis as you see, as we vary that sampling rate. 14:49:13 And we look at the inference accuracy in terms of mean absolute error. So, again, at this point, with whatever neural network we got from the previous stage, we analyze the sub-sampling on that 14:49:32 selected neural network. So these preliminary results were for the machine learning with the multilayer perceptron, but moving forward we're also exploring the convolutional neural network. 14:49:45 At this point I didn't include the results for the CNN, but we also have good results from the convolutional neural network. 14:49:53 And once we have the machine learning algorithm in place, 14:49:57 how do we realize the hardware? And the hardware, when I'm talking about it, could be an FPGA, but with the final objective towards ASICs, because we know that an FPGA has a bottleneck in terms of resources.
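The snippet-regression setup described above can be illustrated with a small NumPy sketch: synthetic waveform snippets (a template pulse scaled by a random amplitude, plus noise) are sub-sampled and fed to a small perceptron that regresses the true peak amplitude. The template, sizes, sub-sampling step, and the shortcut of training only the linear output layer by least squares (instead of full backprop) are all assumptions for the sketch, not the actual BNL framework:

```python
import numpy as np

rng = np.random.default_rng(1)
L, sub = 200, 8                                 # snippet length, sub-sampling step
t = np.arange(L)
template = (t / 20.0) * np.exp(-t / 20.0)       # made-up shaper response
template /= template.max()

# Ground-truth peak amplitudes and noisy snippets, then sub-sampled.
amps = rng.uniform(1.0, 10.0, 512)
X = amps[:, None] * template + 0.05 * rng.standard_normal((512, L))
X = X[:, ::sub]                                 # fewer samples -> simpler ADC

# One random ReLU hidden layer; train only the output layer by least squares.
W1 = rng.standard_normal((X.shape[1], 16))
H = np.maximum(X @ W1, 0.0)
w2, *_ = np.linalg.lstsq(H[:400], amps[:400], rcond=None)   # "training" split
mae = np.abs(H[400:] @ w2 - amps[400:]).mean()              # held-out MAE
```

Varying `sub` in a sketch like this reproduces the trade-off in the slide: fewer input points simplify the ADC but eventually cost inference accuracy (mean absolute error).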
14:50:14 So for our machine learning algorithms, as you can see, we develop in a framework like PyTorch, which is mostly in Python, and then we need high-level synthesis tools that do the translation from Python down to the RTL level. 14:50:28 And once we have this RTL, you can go with a conventional semi-custom digital design approach, mapping to standard cells in your chosen process. 14:50:37 Or you can also map it to FPGAs; that is the purely digital approach of implementation, based on the von Neumann architecture. But with the trend, we also want to look into non-von-Neumann architectures, as machine learning algorithms, these 14:50:55 neurons, do these multiply-and-accumulate operations in terms of millions of operations. 14:51:03 We would like to come up with an in-memory computing approach, as you can see on the slide on the right, 14:51:10 that does the MAC operation in one cycle itself. 14:51:18 So, we will be investigating both these approaches for the final ASIC implementation, and we want to look into new emerging technology devices like memristors. 14:51:34 So to give you a brief idea of what these memristors are, I try to summarize here the fundamental circuit elements, which are very commonly known to everyone, the resistor, capacitor, and inductor, and the relations between the voltage, magnetic 14:51:48 flux, the current, and the charge. 14:51:51 So we know all the relations between these parameters, but between the flux and the charge 14:52:02 there was, to some extent, no relation, and so what could be an element that connects these two parameters? And this was coined as the memristor by Leon Chua in 1971. 14:52:18 But it remained only at the theoretical level; in terms of realization or fabrication, it was just a decade ago that HP came up with the device fabrication, and now it's quite popular.
14:52:34 Why do we want to look into these memristors? Because this is compatible with the existing CMOS process, as it involves only metal-insulator-metal layers, which can be ideally grown on the top metal layer, and you can 14:52:51 still have the CMOS devices underneath. And we know that machine learning algorithms require a lot of memory. 14:53:09 And this is also radiation hard. Because most of the applications we target have a huge total ionizing dose, and you can see how CMOS and the memristor layer degrade, as shown here. So this is the right time to look into beyond- 14:53:26 CMOS. 14:53:30 So, once we have the device, we should have a simulation model for building circuits, so the first obvious step is building Verilog-A models, and this slide summarizes what the characteristics of the memristors are and where we are. 14:53:50 So as you can see on the right, you have the hysteresis characteristic, and the memristor cell has two states, the low-resistance state and the high-resistance state. 14:53:58 So here, 14:53:59 when we go with memristors, the data is stored in terms of resistance values, basically; it's not voltage or current. So the low-resistance state and the high-resistance state represent the logic levels. 14:54:13 And the most common building block when we go with these non-volatile memories is the crossbar array, which can be used as conventional memory, like an SRAM, or you can also use it for in-memory computing, where 14:54:35 the vector-matrix multiplication is done on the fly. 14:54:39 So we are planning to submit our first memristor crossbar array later this spring. 14:54:47 And this is part of an LDRD initiative, and I thank the Instrumentation Division and the Computational Science Initiative divisions for being part of it.
14:54:58 And we also have external partners like the University of South Florida, and we are happy to collaborate with any other universities who are interested in working in this direction. 14:55:10 And that's it for me. Thank you, Jen. And if there are any questions or comments, I'm open to listen and learn. Thank you. 14:55:21 So, just want to echo again that we have time reserved for discussion. So any questions, please. 14:55:34 Let me use the opportunity to tap your hardware expertise with a question I also had this morning. So, it takes time to develop, while we are trying to build the system for the early 2030s. 14:55:52 So, my question would be: for some device like the memristor, 14:56:16 how long will be the development cycle? In a sense it's still a concept; right now it's an experimental device, and it involves a few cycles of prototyping, as well as test samples and runs, before it can become a stable product, 14:56:22 right. So, how many years? Yeah, thanks Jen, so I can give you some insight. We are working with partners now at this stage, and our ultimate objective is large-scale ASICs, right, not just a single memristor device. 14:56:39 We will have millions of memristors sitting on the top layers, along with our CMOS layer, so we are identifying the fab partners who can grow memristors for us at large scale. 14:56:51 And once that gets ready, ideally, as the process is already there, 14:56:58 starting now with the first crossbar array design, in six months we should have a first crossbar array. And then the objective is to map machine learning algorithms to crossbar arrays. 14:57:14 So at this point, 14:57:16 by the end of the year we should have some 14:57:20 chip or prototype being tested, having memristors as memories in crossbar arrays. So this is a timeline I can give, but once the...
14:57:31 the project also deals with the methodology: the machine learning algorithms are in Python, and then we want to go with the conventional route to RTL. 14:57:44 So we have to look into high-level synthesis tools that do this job, and here we are not dealing with a single gate or two gates, we are talking in terms of millions of gates. 14:57:53 It has to be a commercial tool that does the job for us, and we have identified some: Mentor's Catapult does this job of high-level synthesis, and on the FPGA side there is a new tool called Vitis from Xilinx that does this kind of mapping 14:58:09 from PyTorch or TensorFlow. From RTL onward it is a standard design procedure, so the time, I think, will be spent mostly on the HLS side rather than on the conventional design tools. 14:58:25 But by the end of the year we should have one small prototype demonstrating this technology of memristors and crossbar arrays. 14:58:34 Did I answer your question, Jin? 14:58:41 While I invite other people to ask questions, let me continue with my second question, which is about what I believe is your slide 11. 14:58:59 So, slide number eleven, 14:59:07 if you could bring it up. 14:59:17 Yes. So you are comparing, let me open my notebook, comparing the two pictures. 14:59:30 So my question is this: as we know, there are implementations on the left side, FPGA-based and ASIC-based digital neural networks. 14:59:52 Comparing that to the right side, we need to really go through a DAC and ADC cycle. You are comparing the power consumption, and in the digital system the setup takes power to process, but the ADC conversion here will also take power. 14:59:57 So, when you are talking about machine learning algorithms, 15:00:01 we are talking about millions of multiply-and-accumulate operations here.
If you go with the conventional von Neumann architecture, where you have a memory unit, a control unit and a processing unit, you will have millions 15:00:18 of exchanges between your arithmetic unit and the memory unit, and that is where the majority of the power will be spent, if you go with the von Neumann, digital approach. 15:00:33 On the right, the trend is this: why can't we implement the dot-product operation in one shot? The weights are already stored in the array: you have these horizontal and vertical rows, the weights are stored in each of these devices, and the device could 15:00:50 be a memristor, or it can be implemented with the conventional six-transistor SRAM. 15:00:56 Then, when the input vector comes in, all the multiplications and additions are done on that horizontal line itself, in one cycle. 15:01:10 But now it is not a digital approach any more: the additions happen in terms of currents here. 15:01:18 That is why we call it an analog approach. The ADCs used here are quite often flash ADCs, for single-cycle conversion. They consume a bit of power, but if you compare the overall power consumption of the MAC operations, the digital approaches 15:01:35 take a lot of power. 15:01:41 I see. If you have several layers, for example, 15:01:59 just the concept of a multi-layer network, can the matrix multiplications be done just layer by layer, without any digital system in between? 15:01:59 Each neuron layer, like on the left of the slide, should in the end be mapped to a crossbar array, and then we will have the nonlinear elements, like a ReLU implementation, for the nonlinear function.
15:02:19 And we connect several crossbar arrays in sequence for the total neural network. That is the long lead, the final goal, I can say, so we are taking it down in two steps. 15:02:31 First, start with a crossbar array mapping a single layer. Initially we might have the ADCs and DACs on the FPGA board itself, not even on the ASIC. 15:02:52 Then move these ADCs and DACs onto the ASIC as we test more prototypes and learn from them. 15:02:55 Is that okay, Jin? 15:02:58 And then also, the last thing I am looking forward to is comparing this to the triggering based on chi-square, 15:03:10 like what Sergei introduced in the last workshop, 15:03:13 and hopefully we can get to some graph, because these are still two complementary approaches. Sure, that is a good suggestion, and this is also a direction we are looking into. 15:03:28 We obviously have to have a benchmark, a comparison of what we gain if we go with machine learning algorithms versus what we gain with the chi-square formula you are talking about. That is again 15:03:43 work in progress, and probably down the line we will compare where we are in terms of error. 15:03:55 I was going to say, on the converse, it might be interesting to try applying a CNN filter, or CNN 15:04:02 networks, using recursive polynomials, for instance. 15:04:07 Yeah, CNN work is ongoing, 15:04:20 and I can certainly show you the convolutional neural network results too. 15:04:34 So once again, on the discussions, we still have lots of discussion time. 15:04:39 Let me break in here for a minute. 15:04:45 Can you stop us, please? 15:04:46 Uh, I just wanted to ask the speakers if they could remember to upload their talks to the Indico page. We've already had one request for one of the talks.
15:05:00 And I think, even though I'm recording the session, it makes it easier for people to find a particular talk to download if they're all uploaded to the Indico site. 15:05:13 Thank you. 15:05:17 To the speakers who already presented, please upload those slides, and the rest of the speakers, we would be thankful if you upload the slides as well. Thank you very much. Okay. 15:05:28 So, if there's no immediate question to Sandeep, 15:05:33 let's go to the second talk, 15:05:37 which sort of brings me to the digital side. 15:05:40 So, Sergei, would you like to share your screen, please? 15:06:19 You are a few minutes early, maybe. 15:06:23 Hello. 15:06:27 Yes. Would you like to share your screen? 15:06:30 Yes. 15:06:34 It's going to be moving forward quickly, so, yeah. 15:06:49 It's working well. 15:06:54 Do you see my screen? 15:06:58 Now it is on the screen. It's working very well. 15:06:59 Okay. 15:07:01 Thank you. 15:07:02 I would like to provide some updates on our work on machine learning on an FPGA implementation for event selection. 15:07:11 First I would like to explain a bit my motivation and show some examples from other experiments, and how this is related to our EIC environment; then a few words about why a neural network is interesting, 15:07:28 how we can use it and for what; then explain a bit about optimization; 15:07:34 and finally propose some hardware solutions for the experiment. 15:07:40 The motivation is typical. 15:08:07 Here is an example from CMS. This is the typical structure of data acquisition: we have a detector, we have a level-1 trigger, and we have a high-level trigger, 15:08:27 and finally storage. The difference between level-1 and high-level is always the same, despite the fact that level-1 can be in hardware, FPGA or ASIC, or even, as in some modern experiments, a computer-based trigger.
15:08:38 But basically the decision at level-1 is done on a very limited subset of the parameters of an event, from a limited subset of the detectors: muon clusters, energy sums of clusters, rough correlations of tracks and so on. Only 15:08:55 after this are the pre-selected events, with some part of the physics lost forever, sent to the high-level reconstruction, which makes the final physics reconstruction and the final decision on the event, and everything else is lost. 15:09:23 Also, machine learning algorithms are widespread and used extensively everywhere, like in the offline processing, in the high-level trigger, and so on. 15:09:34 And if you combine these two things, we can actually use the algorithms which are usually run at a higher level, like level-3, at level-1, which is much faster, 15:09:51 and the decision could be much more sophisticated and cleaner for physics. 15:09:59 Machine learning methods are already used a lot in physics, and the methods keep being developed. 15:10:10 We have something in common, but the implementations might be quite different. For example, a typical trigger farm for many years used CPUs only; then people tried to improve performance using GPUs; some also considered FPGA accelerator cards 15:10:28 for the computers; 15:10:30 and finally, pure FPGA solutions. 15:10:35 The difference between GPU and FPGA is that the GPU is optimized for high throughput, not for low latency, while the FPGA provides extremely low latency, at the sub-microsecond level. 15:10:59 The FPGA can be used as an accelerator card in a computer, but the more natural 15:11:12 place for the FPGA is standing alone in the data stream, immediately after the detector. On this picture you can see our proposed place in the data acquisition scheme.
And the idea is to insert a machine-learning FPGA somewhere, and the more natural place is 15:11:18 right between the front-end and the data acquisition farm. 15:11:24 This concept is actually opposite to the idea of writing to disk all the unsuppressed, streaming data from the detectors. 15:11:35 That might become just impossible at some point due to background, because electron machines are much noisier than hadron machines. 15:11:45 There might be a lot of other problems which we cannot estimate right now, and it would be wise to include it at an early stage of the design, just to have a chance to use it later if it is needed. 15:11:56 Initially we can start just in passive mode, and then we can add the background rejection that we need. 15:12:02 Then, if there is again some problem with data size, we can prescale the processes with the highest cross sections, for which the needed amount of data will be collected in a short time anyway. 15:12:17 And finally we can also add selection of some classes of events. 15:12:24 It is not a replacement for the farm, it just complements the computer-based farm, and it can also mitigate problems if something happens in the farm software. 15:12:43 The actual proposal you can see in this picture: all the detectors just provide their data stream to a dedicated 15:12:46 FPGA machine-learning machine. For example, a Cherenkov detector is processed as a whole at the same node, and it provides the local decisions: timestamps, signal alignment, and the local decision itself. 15:13:04 And it is the same for the other possible detector technologies, whatever technology is used for the physics.
15:13:14 All the local decisions should be sent later on to a so-called global filter, and there we can decide what to do: we can just label the data with some additional information from the detectors and pass it on to the final computer 15:13:28 farm, or we can decide to cut some events which are obviously noise or background, 15:13:36 or, finally, we can send on all the data which has been selected. On the bottom picture you can see a representation of the scale of the units: a detector is a large system, a readout unit is usually a set of crates, 15:13:52 what kind of crates we can discuss later, and the farm is usually multi-level, with storage. 15:14:05 Finally, in Hall D we have a setup for beam tests, and right now we are going to test two detectors, a GEM TRD and an electromagnetic calorimeter, with electrons from 3 to 5 GeV. The GEM TRD chamber provides data which 15:14:23 can give particle identification and can also provide tracking. 15:14:30 We collect and aggregate some information from the detector and send it for the decision to a neural network. 15:14:39 By the way, I forgot to mention: for our own offline analysis we have already done this many times, and the performance looks quite good. 15:14:47 For the training and for everything else we usually use a standard machine-learning package. 15:14:58 Okay, so now we know that machine learning for our detector is promising; what can we do in real time? Again, there are several solutions, and all of them can work, by the way, but I will speak only about the latest, our solution, which is an FPGA solution. 15:15:15 Here we go. Later on I will discuss what steps can be done on the way from our offline model to an online solution.
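The passive "label rather than drop" mode described for the global filter can be sketched as follows. This is a toy illustration, not the actual EIC design: the event format, the per-detector score convention, and the 0.5 threshold are all hypothetical.

```python
# Sketch of the global-filter idea from the talk: in passive mode each
# event is only labeled with the local ML decisions and everything is
# kept; a later active mode could cut events that every local filter
# flags as background. Event format and threshold are illustrative.

def global_filter(events, active=False, threshold=0.5):
    out = []
    for ev in events:
        # local_scores: background probability from each detector's FPGA node
        is_background = all(s > threshold for s in ev["local_scores"])
        ev["label"] = "background" if is_background else "candidate"
        if active and is_background:
            continue  # active mode: drop obvious background
        out.append(ev)  # passive mode: keep everything, labels attached
    return out

events = [{"id": 1, "local_scores": [0.9, 0.8]},
          {"id": 2, "local_scores": [0.1, 0.7]}]
print([e["label"] for e in global_filter(events)])  # ['background', 'candidate']
print(len(global_filter(events, active=True)))      # 1
```

The point of starting passive is visible here: the downstream farm still receives every event, just enriched with the labels, so the selection can be validated before any data is ever discarded.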
15:15:28 First of all, I would just like to note why the FPGA is really interesting, and what the difference is in implementing a neural network there, compared to GPU and CPU. 15:15:39 As you can see here, the name naturally comes from the neuron, and what we have is a network with multiple mathematical calculations on every connection. 15:15:52 This is actually a topology, and a computer, CPU or GPU, calculates it sequentially, or with multi-threading where possible, in two, three or many steps. 15:16:05 If you look at an FPGA, it is actually already an array with some topology, and it is quite easy to move the topology of the neural network onto the topology of the chip. 15:16:18 Again, it is not programming, it is not loops: you just cut the unnecessary connections and set up the needed connections. 15:16:27 Nowadays, modern FPGAs provide a lot of so-called DSP slices. 15:16:34 You can have up to six to twelve thousand per chip, which simplifies the calculations for a neural network a lot. 15:16:45 Next, Xilinx provides a tool called high-level synthesis (HLS), which actually helps us convert C++ code, which is sequential, with loops, into an FPGA topology. 15:17:02 After the high-level synthesis runs, it provides a resource estimate. For example, as I will mention several times, the initial implementation used 21% of the DSP slices, which is the most critical resource for 15:17:18 a neural-network implementation on an FPGA. The high-level synthesis provides us a so-called IP core, which can be inserted into a normal VHDL or Verilog design. 15:17:31 For example, the Vivado software synthesizes it and has a test bench for this core.
15:17:40 And you can see the result of this here. You see some differences in the representation of the values, because the original was in double precision, and the FPGA realization of the core is similar but not in floating point, it is actually in fixed-point format. 15:17:57 You see the response of the network is slightly different, but it does not really affect our final decision, because all the differences are only in one bin. 15:18:08 Okay. Another good tool for work with neural networks in FPGAs, at least for me, is hls4ml, provided by a group of people. Excuse me... 15:18:21 Looks like 15:18:28 I 15:18:37 okay, I can continue, probably. 15:18:41 By accident I pressed a link. 15:18:47 Yeah, I'm here. 15:18:49 Okay, going back to the slideshow. 15:18:53 Okay. What it is doing is providing us some parameters which can optimize our neural network port. For example, if you do not have enough multipliers, it can use a so-called reuse factor and use each multiplier twice, or four times, 15:19:12 for the same neural network. In addition, it can optimize the topology: for example, there are a lot of weights which are not really used in the final decision, 15:19:23 so it just cuts the unnecessary multiplications in our network. 15:19:28 And here you can see the 15:19:41 optimized results. For example, the first optimization was in the accuracy of the data representation, and you can see that already here we are at only 10% of the resource utilization. 15:19:42 And after cutting the unnecessary multiplications we dropped to 2% of the DSP utilization; using these tools it becomes almost an order of magnitude less. 15:19:57 And we also started a simulation for our calorimeter and a neural network.
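The two optimizations just described, reduced fixed-point precision and pruning of small weights, can be illustrated with a minimal sketch. The weights and thresholds below are toy values, and this is not the actual hls4ml implementation, which configures these effects via its precision and pruning settings.

```python
# Toy illustration of the two hls4ml-style optimizations discussed:
# (1) quantizing weights to a fixed-point grid, and (2) pruning small
# weights so their multiplications can be removed from the FPGA design.

def quantize(w, frac_bits):
    """Round to a fixed-point grid with 2**-frac_bits resolution."""
    scale = 1 << frac_bits
    return round(w * scale) / scale

def prune(weights, threshold):
    """Zero out weights below threshold; zeros cost no multiplier."""
    return [0.0 if abs(w) < threshold else w for w in weights]

weights = [0.7312, -0.0041, 0.2519, 0.0008, -0.4987]
quantized = [quantize(w, 8) for w in weights]   # keep 8 fractional bits
pruned = prune(quantized, threshold=0.01)

remaining = sum(1 for w in pruned if w != 0.0)
print(pruned)      # small weights become exactly 0.0
print(remaining)   # 3 multipliers left out of 5
```

Shorter fixed-point words make each DSP operation cheaper, and every pruned weight removes a multiplication entirely, which is why the combination took the utilization from 21% down to 2% in the talk's example.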
You can see it here: it is a very small neural network, only three by three cells, and we have a GEANT simulation, and the idea is just to use the 15:20:08 response of the calorimeter cells to identify and select particles with a neural network. In principle, you can see here quite reasonable performance. 15:20:24 We also tried to synthesize it, though right now it is quite a small network. By the way, the latency is 69 nanoseconds, which would be difficult to reach with other types of hardware, 15:20:37 and the DSP utilization is about one percent. 15:20:41 Okay, now the possible hardware solutions which we can think about implementing in the new collaboration. I cite previous experiments, because previous collaborations have already measured a lot of the characteristics; otherwise it would be difficult to 15:21:00 consider. Nowadays, a lot of physics uses so-called industry-standard ATCA crates and openVPX, 15:21:11 and you can see one here. This is not a new or impossible problem. 15:21:20 This system is receiving eight million pixel channels from a pixel detector, at a total of up to 52 gigabytes per second, and all the processing of the hits from the pixels is done in one crate; it provides track reconstruction and data reduction, and so helps the decision. 15:21:36 What is the advantage of this crate? The backplane of this crate has a full mesh, a fully interconnected backplane, and that means any FPGA on any board can send data to any FPGA on any board, 15:22:05 and it depends on the algorithm how you are going to process it. 15:22:01 Another example is the so-called trigger DAQ for the PANDA STT. 15:22:06 It also uses a full mesh on the backplane; it is a standard crate. 15:22:17 About 2,200 channels from the STT detectors are also received in one single crate, which is fully interconnected, and they can be analyzed all the time. 15:22:46 What is very interesting, which I found when I was looking at this solution:
this is exactly the same crate, without any changes; only the algorithm for the decision is changed, which is mostly software, not hardware. 15:22:58 It could be reused for our calorimeter processing. 15:23:03 Yeah. And finally, what is the proposed solution? Again, every detector sends to its own local so-called processor, but the data should all be sent to one place, just to have a chance to order and process all the cells of the whole detector at one unit, 15:23:23 no matter whether it is ATCA or openVPX, it doesn't matter. After the local decisions, the data is again sent to a global filter. 15:23:34 And again, with the same idea: we just attach a piece of information for the high-level farm to speed up the processing, because all the data will already be aligned. Actually, aligned data is something similar to event building, 15:23:53 which means the high-level DAQ farm will already receive all the aligned data, with events built for the detector, and the decision can be made either here, or all the decisions can just be transferred to the farm. It is quite flexible, and depending on the rates and backgrounds, 15:24:09 the configuration can be changed at any time. 15:24:14 Okay, yeah, and finally the outlook. 15:24:25 The FPGA shows good performance for our specific neural-network operation and low-latency filtering, and there are also tools which can provide a lot of help and simplify the development of filtering based on a neural network, because the filtering is usually 15:24:39 developed by physicists, and the distance from the physicist to the hardware implementation is usually quite long; nowadays, with HLS and especially with the hls4ml software, it is quite simple. 15:24:54 And finally, I would just suggest that at some point we reconsider a level-1, or whatever you call it, filtering, which could be beneficial for physics analysis,
15:25:07 like what you can see here, or maybe selecting jets, for example from W and Z production, and so on. That's it. Thank you. 15:25:22 Thanks, Sergei. That was lots of progress since the last workshop, and thanks very much for the update on this topic. Let me just pause for questions from the audience, please. 15:25:39 I see Jo has a hand up. 15:25:44 Yeah, thanks for the nice talk. 15:25:47 I'm curious to know, 15:25:50 so, I think putting machine learning algorithms closer and closer to the detector is a really interesting potential solution to reducing data rates. 15:26:02 I'm curious to know what your opinion or take is on how those algorithms will deal with cases that may come up that they were not trained on. 15:26:16 So for example, if there are some additional beam backgrounds or something that the simulations the algorithms were trained on didn't have, 15:26:26 then you may or may not be writing out data that you may or may not want to keep, or something like this. Do you have any comments on that? 15:26:35 Yes. Yeah, any network right now does just what it was trained on. 15:26:43 And if you remember, I pointed out that we can start in pass-through mode, and then, later on, we can collect the data and understand what is background and what is a real event, and we can train a new neural network to select what we would like 15:26:58 to reject. Using these tools it's quite easy, it's just a software process. The whole process from offline to online, I don't know: compilation of a complicated neural network can take from one hour to six 15:27:15 hours, 15:27:22 sometimes longer. 15:27:29 But again, this is not weeks; it's just a day from the point when you understand what the background is until the network is trained to reject that background. 15:27:34 Okay, thanks. 15:27:39 Thanks, Jo.
15:27:42 Yeah, so, um, you had one slide, Sergei, where you showed the 21% utilization that you got, and I was just kind of curious, because I've heard that with these high-level synthesis programs it's very hard to get very much utilization out. 15:27:58 Is there any expectation that with these neural nets you're going to be able to get more than is typical from, say, just programming an algorithm in C++? 15:28:19 Sorry, I actually missed the point; what is the problem? Yeah, I'm trying to figure out the 21% that you were getting for utilization, right? 15:28:26 That seems to imply you have a lot of headroom there, where you can possibly make a much larger neural network. Yes. Yeah, but what is the limit? 15:28:40 From things I've heard, the high-level synthesis tools cannot utilize very much of the FPGA; 15:28:44 do you have any idea how far you could go? Okay, yeah, maybe a little bit ahead of what I propose: first, as you can see here, the total number of DSPs available is 6,840, and right now we use 1,415, which is 21% of the DSPs. 15:29:03 But this is not optimized at all; it uses quite a long representation of the values. So if it were not enough, for example for a larger network or whatever, we could go to the next stage: 15:29:30 right now every DSP is used only once, with a reuse factor of one; we can use it twice, and it does not produce much additional latency. For example, right now the latency is 16 nanoseconds; 15:29:37 let's assume it becomes 120 nanoseconds, but that is still quite low, it is not really important. We can use a reuse factor of four, and in this case we can have a latency of 240 nanoseconds, but we can realize 15:29:54 quite a large network.
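The reuse-factor trade-off just described can be sketched with back-of-the-envelope arithmetic. The 6,840 DSP total and the 1,415-DSP, 16 ns starting point are the numbers quoted in the talk; the simplifying assumption that DSP usage divides by the reuse factor while latency multiplies by it is mine, and the real latency scaling reported above is not this clean.

```python
# Rough trade-off sketch: a reuse factor R time-multiplexes each DSP
# over R multiplications, dividing DSP usage by ~R at the cost of ~R x
# latency. The scaling model here is a simplifying assumption.

TOTAL_DSPS = 6840  # device total quoted in the talk

def resources(n_mults, base_latency_ns, reuse_factor):
    dsps = -(-n_mults // reuse_factor)          # ceiling division
    latency = base_latency_ns * reuse_factor    # idealized scaling
    return dsps, latency

for r in (1, 2, 4):
    dsps, lat = resources(n_mults=1415, base_latency_ns=16, reuse_factor=r)
    print(f"reuse={r}: {dsps} DSPs ({100 * dsps / TOTAL_DSPS:.0f}%), ~{lat} ns")
```

Even the idealized numbers make the speaker's point: trading tens of nanoseconds of latency frees most of the DSPs for a larger network.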
And after this we can also use the other optimization. The second optimization, as it looks, is pruning off the unnecessary connections which do not really affect the output, every connection whose weight is very small, 15:30:16 and this picture shows exactly that: after, first of all, the accuracy of the representation of my values is reduced by a factor of two, and in addition the unnecessary connections in my network are pruned. 15:30:28 Certainly it affects the performance a little: for example, the accuracy of the network dropped from 95% to 81% after this pruning, but again, for an online solution maybe it's not so critical. That's why, 15:30:43 finally, you can see the number of 2% instead of 21%: we really reduced by an order of magnitude 15:30:52 our utilization of the DSPs, which is critical for a large neural network. 15:31:00 Thanks. 15:31:03 I think the next one is Martin. 15:31:05 Actually, I wanted to ask something else, but: I don't think the HLS is really posing a restriction on the occupancy of the gates; you can happily torture it, and it will spit out whatever code it is. You had 15:31:57 fixed-precision arithmetic in there, right, and you had subtle differences with IEEE floats or doubles or something like this. And I tried to actually find a library that you could run on the CPU, just to see what fixed-precision arithmetic is actually 15:31:58 doing, rather than actually going onto the FPGA with it. And for the heck of it, I could not find a library that would just give the exact same numbers, in all cases, 15:32:09 as what you get on the FPGA with fixed precision. So if you want to emulate this, I was wondering if you had found something, or if you have some advice here. 15:32:20 Yeah. Yes. Yeah, I just use the Xilinx software.
I tried Vitis a little bit, and it looks like it is in good development. I don't use external libraries, I just use what Xilinx itself provides, 15:32:36 and you can see this is a screenshot from the Vivado HLS program. 15:32:42 Yeah, no, but I mean, this runs on the FPGA, right? 15:32:49 What I was getting at is whether you can actually emulate the fixed-precision arithmetic on the CPU, just to see what the effect is. 15:32:59 Yeah. I never tried to use external tools for that simulation; what is actually easier is to use hls4ml. There you can freely 15:33:12 set the precision for every layer: you can set the precision for the input, the intermediate layers and the output, whatever you like, and it's quite easy right now. 15:33:22 That's why, if I were asked what the easiest solution is: just install hls4ml, and you will have quite a nice simulation of the precision effects. 15:33:33 Okay. All right, thanks. 15:33:37 The next one is someone with a username I couldn't even recognize. So, 15:33:44 Ryan? No, sorry, I am probably mispronouncing the name. 15:34:02 Sergei, they are trying to get your attention. 15:34:16 But anyway, I just want to confirm that HLS can give you high utilization on the FPGA; we have some projects with more than 90% utilization. 15:34:24 But there are several features you have to know about when you are using HLS. It is simple: just don't make your C++ program very long, 15:34:35 otherwise you will have a problem with the loop controls and chains, and with too many sequential function calls it can choke.
15:34:46 Also, in some situations you might drop your clock frequency, from 250 megahertz, for example, down to 225, which is what we did. But HLS is very capable, and in general it doesn't impose any serious limitation on you, so don't assume you can only use 20 15:35:06 or 30 percent; you can get almost to the maximum of the chip. 15:35:10 I think my comment was based on some word on the street from years ago; it sounds like it's no longer valid, so I will happily accept that it's no longer true. 15:35:21 Yeah, but I have spent quite a few months using it; with experience there is more than one issue, but you learn how to work around them. 15:35:30 Yeah, everything is good. 15:35:52 Let me just point out one other thing, related actually to that and also to what Martin was asking about: 15:35:43 the HLS libraries give you C++ libraries that do the fixed-point representations, so you can pick whatever resolution you want. 15:35:53 That's what gets synthesized, and that's the accuracy you get on the FPGA; and you can run that with GCC, and that library also gives you the same results for the fixed-point arithmetic, to whatever precision you ask for. 15:36:07 Okay, I think I will retry this, because that was actually the way I got different results. But okay. Yeah, I think you're right that code using floating-point numbers might have an issue, but I think for fixed point you should have 15:36:21 that level of accuracy between the two. Okay. 15:36:27 Thanks, people have pointed out so many useful things. 15:36:30 We can follow up offline. 15:36:32 So let me just 15:36:42 take my turn again and ask another question to Sergei. 15:36:47 So, still back to the resource utilization: I assume that when you show the resource utilization of the optimized neural network, it is for one channel, and the latency you quote is the time to process that, so these are the properties of the network.
15:37:08 So my question is, when we have 100,000 channels and we need to process everything at every crossing, how much resource do we need? 15:37:22 Are you talking about the latency, or about the size of the network? 15:37:27 Because if the latency is high, we need to multiply the FPGAs to process multiple sub-events, as you said previously. So there is the latency scale, and I think this also relates to the question before: I'm just wondering how 15:37:48 this very nice demonstration scales to the full detector level, especially when we have a high-rate environment. 15:37:55 Yeah. I think we need to put some effort into understanding how we can compress our neural network. 15:38:04 It seems necessary for these neural networks, but again, this is only our test bench, which is quite small and short. That's why we are also trying to estimate what we need for a full-size detector. 15:38:15 But what I found for an FPGA is: if you just grow the size of the neural network, whatever resources are used, it does not increase your latency until you 15:38:29 run out of resources. Because most of it is done in parallel, you just consume more and more resources, but the latency stays at the same level. 15:38:39 Only when you finally reach the limit of the free resources and try to optimize again, for example by limiting the amount of DSPs used and so on, only in this case do I expect some problems with latency. But, again, we are also considering some additional resources 15:38:58 for the input, data transfer and sorting control; that's why 15:39:03 right now we are looking separately at what the neural network itself consumes and what the input, output and sorting control will consume. This work has only just started, as you can see; we are at a very initial stage, and I hope we will know more in the near future. 15:39:21 Yeah, so it will be very interesting to learn.
15:39:26 And just remind me, for the GEM TRD, after the optimization, how many parameters do you have? 15:39:46 Just let me check. For the 15:39:43 regression we needed, for example, about half a million parameters, but for the classification network, after all the optimization, only about 77 DSPs are used. 15:39:56 I still don't know how to convert the number of parameters into actual multipliers, because the number of DSPs in use usually depends not only on the number of nodes but also on the representation, floats, sometimes long floats, 15:40:18 so it does not always scale the same way, and it's not always the answer. Usually I look only at the DSP numbers. 15:40:21 The actual size of the network, maybe it's also reported somewhere, but I don't know. 15:40:26 I see, I see, that makes sense. Also, to echo your previous comment regarding the tracking: our current studies also need a convolutional neural network; is that a very high hurdle for the FPGA? 15:40:39 Well, there were positive results on this. 15:40:45 Right. Interesting. 15:40:45 Yeah, but we are actually working right now on exactly this problem, with the tool chain and the version we had last time, and it seems that the 2017 version 15:40:59 could synthesize quite large CNNs. But right now, working on the same problem, trying track finding with a CNN, it fails to synthesize, for some reason unknown to me. 15:41:14 Maybe we can discuss it again, because probably someone knows a solution and what the problem is. In any case, it's a topic that keeps coming up in this forum, at least for me, and hopefully in the near future we will fix this problem and we can again 15:41:29 synthesize large CNN networks. 15:41:34 It will be very interesting to follow up on that. 15:41:38 So let me just watch the time here, and ask for any last questions.
15:51:02 Okay, thanks for coming back to the second half of the session. Let's continue with the next talk; can you please share your screen? 15:51:20 Hello everyone. 15:51:23 The title of my talk is machine learning data compression and noise filtering for real-time data. In this talk we explore the feasibility of machine learning, or more precisely deep learning, for data compression and noise filtering. Real experiment data 15:51:47 can sometimes be too noisy, and too large and expensive to fit into persistent storage, without compression and noise reduction. For example, in this diagram of a detector readout and data acquisition system, 15:51:57 on the left we can see that the bandwidth to the computing cluster is around 10 terabits per second, but we expect the bandwidth from the computing cluster to storage to be 100 times smaller, around 0.1 terabit per second. 15:52:15 And we hope that machine learning can help us achieve this goal. 15:52:22 The data we use in this research is time projection chamber, or TPC, data. There are two reasons we chose TPC data. The first is its popularity as the main tracking detector for both Relativistic Heavy Ion Collider and Electron-Ion Collider 15:52:41 experiments. The second is that TPC data is challenging: it dominates the data volume, so we definitely have to compress it, and it is noisy; sometimes it may contain over 50% noise from the experiment 15:52:59 background, so we must do noise filtering. We also have to develop our algorithms to be efficient, to match the high throughput of TPC data taking. 15:53:14 And this is the workflow. We first develop a detector model, we run simulations, and we obtain a 3D data frame like the one shown on the right. 15:53:31 And we prepare ourselves for the toughest scenario.
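The bandwidth figures quoted above imply an overall data-reduction target; a minimal sketch of that arithmetic, taking the talk's approximate numbers at face value:

```python
# Rough arithmetic from the bandwidth figures above (numbers as stated
# in the talk, treated as approximate): ~10 Tbit/s into the computing
# cluster versus ~0.1 Tbit/s from the cluster to persistent storage.
daq_bandwidth_tbps = 10.0
storage_bandwidth_tbps = 0.1

# Overall data-reduction factor needed before storage.
required_reduction = daq_bandwidth_tbps / storage_bandwidth_tbps
print(round(required_reduction))  # 100
```

That factor of roughly 100 is what compression plus noise filtering together have to deliver.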
In this study the toughest scenario is 0–10% central gold–gold collisions with 170 kilohertz pileup; these are the busiest events in sPHENIX. 15:53:43 Now let's go into the time projection chamber, in order to get a concrete idea about the data we are going to deal with. The time projection chamber is basically a cylinder with three layer groups. 15:54:02 Each layer group has 16 layers in it, so there are 48 in total. 15:54:12 In each layer the detectors are arranged in a rectangular grid: we have 498 rows along the z axis, and 2,304 columns for the outer layer group, 1,536 columns for the middle layer group, and 1,152 columns for the inner layer 15:54:35 group. 15:54:37 And also the data format: 15:54:41 the ADC value is a ten-bit integer in the range from 0 to 1023. 15:55:04 With a sampling frequency of around 20 megahertz and a frame frequency of 80 kilohertz, this means an uncompressed data rate of about ten terabytes per second. 15:55:15 I want to mention that this morning 15:55:22 there was a talk by Takao Sakaguchi; his algorithm achieves an average compressed data rate of around two terabytes per second, that is a one-fifth compression ratio. 15:55:36 Before we dive into our approach, I want to give a brief summary of existing lossy compression algorithms. There are many existing compression algorithms designed for simulated 15:55:49 scientific data represented by dense matrices of high-precision floating-point values: SZ, which is an error-bounded lossy compressor for HPC 15:56:07 data; ZFP, which is a compressor for multi-dimensional integer and floating-point data; and MGARD, which is a multigrid adaptive data reduction approach. 15:56:32 However, they all share what is actually a problem for our goals.
The first issue is that they are all somewhat hand-crafted and have to be manually tuned to fit the data. 15:56:36 They are also missing a learnable noise-filtering mechanism. 15:56:41 Now we can introduce our approach. We call it a convolutional neural autoencoder. There are three components in the name: neural, convolutional, and autoencoder, and I'm going to give a very brief introduction to each one of 15:56:57 them. The first is "neural", for artificial neural network. An ANN helps a machine learn a function, just as the nervous system does for a living organism. 15:57:10 Structurally, 15:57:28 an ANN has input layers, output layers, and some hidden units in between. A hidden unit takes input from the previous layer, applies a nonlinear function, and outputs a number to the next layer. 15:57:33 We also call these units "activations", to reflect the fact that they are mimicking real nerve cells. 15:57:47 As I said before, 15:57:49 each unit takes a linear combination of the outputs from the previous layer, and the learnable part of an ANN is exactly the weights and biases of this linear combination. 15:58:05 An ANN learns via feedback from a loss function evaluated on its output. Exactly how learning is done is beyond the scope of this talk, but I want to mention that a similar or even identical setup of an ANN can learn totally different things given different 15:58:24 feedback; that is, a different loss function can make the same ANN setup learn different things. 15:58:34 The second component is the convolutional neural network, or CNN. A CNN is a specific architecture developed to handle high-volume image data. It achieves this goal by a mechanism called parameter sharing. In the diagram, the blue squares are the data 15:58:52 and the yellow squares are what we call kernels or filters; we can think of the yellow squares in the lower part of the network as the kernels.
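The parameter-sharing idea can be made concrete with a short sketch: one small kernel is reused at every position of the image, so a pattern is detected wherever it occurs. This is an illustrative toy, not the talk's network; the diagonal "feature" and image are made up.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over the image ('valid' sliding window,
    no padding, no kernel flip -- the cross-correlation convention used
    in CNNs). Every output pixel reuses the same kernel weights: this
    reuse is exactly what 'parameter sharing' means."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A kernel tuned to a diagonal pattern responds most strongly wherever
# that pattern occurs, regardless of position.
diag = np.eye(3)            # 3x3 diagonal "feature"
image = np.zeros((6, 6))
image[1:4, 1:4] = diag      # place the feature at one location
response = conv2d_valid(image, diag)
print(response.max())       # 3.0, peaking at the feature's location
```

With 64 such 3x3 kernels, the layer has only 64 x 9 weights no matter how large the image is; that is why CNNs stay light on high-volume image data.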
We can think of the kernels as scanning the image and finding patterns. At the bottom I list a few types of 15:59:12 patterns that can be found by the kernels in different layers. We can see that in earlier layers the patterns found are kind of straightforward, like a line in a particular direction or a particular color, and in deeper layers the patterns become more 15:59:29 complicated. 15:59:31 The reason we chose a CNN for this study is its ability to handle image data, its relatively light weight, and its pattern-recognition capability. 15:59:46 The autoencoder is a bowtie-shaped neural network that has two parts, an encoder and a decoder. The bottleneck in the middle is where we get the compressed data. 16:00:05 The output from the decoder is the decompressed data, and we can feed the decompressed data to a task-specific loss function; the autoencoder can then learn its own coding rule 16:00:12 according to this task. 16:00:17 For example, if our goal is to filter noise, 16:00:23 then we can choose a loss function that penalizes noise, and the autoencoder will learn, according to that, not only to compress the data but also to drop the noise in the data when decompressing. 16:00:39 This all leads to a very desirable property of the autoencoder: it is totally data-driven and can learn a coding rule that optimizes a domain-specific task, such as noise filtering. 16:00:53 As an example of ongoing autoencoder research, I want to mention this work on data compression for the Compact Muon Solenoid high-granularity calorimeter; if you want to know more detail, just follow this link. 16:01:11 This is one convolutional autoencoder we tried earlier. It has exactly one encoder E and one decoder D; they both have four convolutional layers. From the encoder we get a compressed code 16:01:28 in float16 format, and from the decoder we get the decompressed data.
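The encoder/bottleneck/decoder structure can be sketched in a few lines. This is a toy, untrained model with made-up layer sizes (dense rather than convolutional, for brevity); only the shapes matter: the bottleneck code is four times smaller than the input, and that is where the compression happens.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, untrained autoencoder -- illustrative sizes, not the talk's network.
n_in, n_code = 16, 4
W_enc = rng.normal(size=(n_code, n_in)) / np.sqrt(n_in)    # encoder E
W_dec = rng.normal(size=(n_in, n_code)) / np.sqrt(n_code)  # decoder D

def relu(x):
    return np.maximum(x, 0.0)

x = rng.normal(size=n_in)      # one input sample
code = relu(W_enc @ x)         # compressed code (this is what you store/ship)
x_hat = W_dec @ code           # decompressed reconstruction

# Training would minimize this reconstruction loss end-to-end.
mse = float(np.mean((x - x_hat) ** 2))
print(code.shape, x_hat.shape)  # (4,) (16,)
```

Swapping the mean-squared-error loss for a task-specific one (e.g. one that penalizes reconstructed noise) is exactly the mechanism described above for getting noise filtering "for free" from the same architecture.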
16:01:34 This autoencoder also has a loss function. 16:01:37 The encoder and decoder are trained in an end-to-end fashion: we send all the weights through the loss function, and the loss function provides feedback for the encoder and decoder to update their weights and get better and better at the task. 16:01:51 I want to mention that this is suitable for training with real data. Remember that for this study we are still using simulation data, which means we have the ground truth: 16:02:04 we know the true label of a real hit versus a noise hit. But we are not using that; we are still treating values below 64 as zero 16:02:21 and values above 64 as the real signal. So we are still in the exploratory stage. 16:02:29 This setup is suitable for training with real data because we only have one loss function: we do not have to know whether a hit is a real hit or a noise hit, we just use a mean squared error. 16:02:43 However, this straightforward autoencoder setup did not really work that well. 16:02:49 The reason is that the distribution is kind of unfriendly to a neural network. 16:02:55 This is a histogram of the ADC values, with log-scaled ADC values on the x axis. 16:03:06 We can see that this distribution is bimodal: it accumulates around zero and around values of about six, that is 64 in the linear range. It is also very unbalanced: we only have around 10% of larger values, above 64, or six on the log scale. 16:03:26 It is very skewed, with a very sharp peak at around six, 16:03:31 and it also has a long and slender tail. Imagine if you plotted this on a linear scale: it would be really thin and long, and just not very friendly for a neural network, because a neural network tends to learn data distributions that somewhat 16:03:51 look like a normal distribution, and it has a really hard time learning this kind of very unfriendly distribution.
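A short sketch of the distribution problem, on synthetic data (the 90/10 proportions are an assumption for illustration, not sPHENIX data): the classes are heavily imbalanced, and on a log2 axis the signal peak lands near six, matching the histogram described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the ADC spectrum: ~90% of samples sit at zero
# and ~10% are signal-like values above the threshold of 64, in the
# ten-bit range 0..1023. Proportions are illustrative assumptions.
n = 100_000
is_signal = rng.random(n) < 0.10
adc = np.where(is_signal, rng.integers(64, 1024, size=n), 0)

frac_above = float(np.mean(adc >= 64))
print(round(frac_above, 2))  # close to 0.10 -> heavily imbalanced classes

# On a log2 axis the signal onset is near 6 (log2(64) = 6) and the range
# ends at 10 (log2(1024)) -- the "sharp peak around six" behavior.
log_adc = np.log2(1.0 + adc)
print(round(float(log_adc.max()), 1))  # at most 10.0
```

A single mean-squared-error loss over such a bimodal, imbalanced spectrum is dominated by the zero peak, which motivates the two-headed scheme introduced next.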
16:04:01 In order to solve this problem, we developed this double-headed decoder scheme. The autoencoder still has just one encoder, but now we have one decoder Dc for classification and another decoder Dr for regression, and accordingly we had 16:04:25 to add a classification loss. 16:04:28 Now we have two classes for labels: zero for values below 64, and one for values above 64. The classification decoder helps us tell whether a value should be zero or should be above 64, and the regression 16:04:49 decoder Dr helps us approximate the value of the signals that we think should be above 64. The decompressed data is now not just the regression result, but the regression result masked by the classification result. 16:05:07 I want to mention that this can be used for noise filtering once we get ground-truth data, because you can modify this loss function to classify not values above versus below 64, but real hits versus noise. 16:05:29 As for the training data for the convolutional autoencoder, we think it is probably a little too ambitious to input the entire chamber, or even just one entire layer group, as training data for the autoencoder. 16:05:44 So we chose a 30-degree sector along the azimuthal direction, that is 192 columns from one layer group, and half along the z direction, that is 249 rows, and just one layer group, as the training data. 16:06:01 With this we achieve a compression ratio of one over 27. 16:06:07 I want to mention that, for these busiest events, SZ can only achieve about one third. 16:06:16 And we get a mean squared error of around 6400, that is 80 squared. 16:06:21 That means the errors are quite large.
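The masking step at the heart of the double-headed scheme can be sketched directly: the regression head's prediction is kept only where the classification head says "signal". The head outputs below are made-up values for illustration, not model output.

```python
import numpy as np

def combine_heads(class_prob, regression):
    """Decompressed output = regression prediction wherever the
    classification head says 'signal' (probability > 0.5), and zero
    elsewhere -- the regression result masked by the classification."""
    mask = (class_prob > 0.5).astype(float)
    return regression * mask

# Made-up head outputs for four voxels (illustrative values only).
class_prob = np.array([0.9, 0.2, 0.7, 0.1])      # classification head Dc
regression = np.array([120.0, 30.0, 85.0, 5.0])  # regression head Dr (ADC-like)
out = combine_heads(class_prob, regression)      # -> 120, 0, 85, 0
```

Retraining the classification head on real-hit-versus-noise labels, once ground truth is available, turns this same mask into the noise filter described above.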
Given these large errors, we definitely need to do more study on how to adjust the network to handle data with a sharp zero-suppression cutoff, and then we can expect an improved MSE. These slides show the comparison of the original versus 16:06:40 the decompressed data. We can see that the decompressed data is kind of blurred, but we do reproduce the global features. There are many local variations 16:06:52 that we have to quantify in the downstream analysis; that means we do not know yet how much this blurring will affect downstream analysis. These are azimuth-by-z sections 16:07:12 for a fixed layer, 16:07:18 and you can still see some blurring here and there. This one is layer by azimuth, and this is layer by z for a fixed azimuth. 16:07:32 And this is the summary and the future directions. 16:07:36 In this work we tested an autoencoder-based compression and noise-filtering network on high-occupancy TPC data, and we managed to obtain a one-over-27 compression ratio while preserving general global features. As for future directions: 16:07:52 we have to optimize the depth of the network to find a proper depth, because deeper networks are maybe better but also harder to train, and we also do not want a bulky network as an encoder, because we have high-throughput data. 16:08:12 We also have to optimize the shape of the CNN kernel. Now we use 16:08:31 a kernel of five in the z direction, three in the azimuthal direction, and three in the layer direction, but we may have to choose kernels of different sizes. And we also want to integrate simulation truth into the training: now we are using above 64 versus 16:08:36 below 64 as the classes, but we want to integrate real ground-truth labels into the training, so as to improve noise rejection.
And we have to test whether it works well for downstream applications, 16:08:56 for example clustering, tracking efficiency, and position resolution; that will tell us how much the blurring affects the downstream applications. 16:09:10 We also have to think about data acquisition hardware integration. This is still in the exploratory stage, but we have to keep in mind that eventually we may have to carry our algorithms over to the hardware level. 16:09:25 That's all, thank you so much. 16:09:30 Thanks, also for being exactly on time. So let me just pause for questions. 16:09:36 Okay. 16:09:55 Okay, so 16:09:59 it's hard for me to step in and ask questions from the side, but 16:10:03 any questions or suggestions? Yeah, please feel free. 16:10:19 All right, let me start then. One of the things that I don't understand about neural networks like this is that it's hard to understand what the machine is doing. So when it comes to compressing data, is there an issue that you kind of don't 16:10:33 have any physics or any kind of algorithm behind what it's doing 16:10:42 when it's compressing data? 16:10:46 Sorry, what is your question again? I didn't get it. 16:10:51 One of the things that worries me about neural networks is that there's not a human-understandable motivation or algorithm behind what's going on; it just knows what's best. 16:11:07 And I was wondering 16:11:10 if that causes any concerns when you're compressing and decompressing data. 16:11:16 Oh, okay. I understand: what you are talking about is that the neural network is kind of a black box, right? 16:11:23 Yeah, yes. That is why I included this picture. 16:11:29 When talking about convolutional neural networks, there are actually many, many studies into what those features are, like what the neural networks really see when they do this scanning.
16:11:45 So these are some examples of the features they pick up; in an early layer they may be able to pick up a line. Now, I'm not a physicist, but I think there may be some patterns that distinguish noise from non-noise, and if the filters can pick 16:12:07 up such a feature, that will help the neural network determine, okay, I am probably seeing noise, because, for example, it is a 45-degree line, if noise tends to be 45-degree lines. 16:12:23 Then, if there is a kernel that picks up that information, the network will be able to say, oh, this is very likely noise. 16:12:33 Okay, thanks. 16:12:43 So, next question, please. Yeah, I have sort of a similar question to Chris's, specifically about the actual input data that you use. You're only taking... maybe I missed something or didn't understand something, which is very possible, 16:12:55 but you're taking a very small section of the TPC here, because the TPC data rate as a whole is very large; okay, that's fine. 16:13:09 But it's not totally clear to me that one particular small section of the TPC is representative of the entire TPC in a given physics event, depending on what kind of event you're looking 16:13:24 at. So I'm wondering how that affects the results that you showed a few pages back, where it was covering the entire face of the TPC. 16:13:37 Oh yeah, actually this is an example of one chunk we use as training data. We actually cut the entire layer group into chunks of this shape and use them together as the training data. 16:13:53 And here I guess we do assume some kind of similarity among them. So, yes. So you're training on the entire TPC, you just chose to minimize the input data. 16:14:17 Yeah.
16:14:08 Yes, and to save some memory, because if you use them all together, it will use a big chunk of memory, but cut into small chunks, the algorithm can run reusing a small amount of memory. 16:14:26 Okay. Now I understand that. So, thank you. 16:14:31 Thank you. 16:14:34 Let me add to this as the question comes up: there is also a reason for this particular choice of chunking. The TPC readout is, first of all, hardware-wise organized into this particular sector shape: 16:14:53 it is one single piece with its front-end electronics, and it does not share them with any other sectors. So it is an isolated readout space from the hardware point of view. 16:15:06 And the second part is that this whole thirty-degree sector, with the half-z readout, is read out by 16:15:29 a single set of front-end electronics going into a single FELIX. 16:15:19 Therefore the data from this particular chunk end up going into a single server. If our input data space were larger than this, it would require multi-server collection, and that is much, much harder to implement on hardware. 16:15:34 So this chunk just happens to be a single piece of readout hardware and also happens to go into a single server. Hopefully that also clarifies why we are interested in this selection. 16:15:49 Yeah, that makes perfect sense. Thanks. 16:15:59 That was an excellent question. So let me just pause for any further questions; we still have a few minutes for discussion. 16:16:10 Okay, if not, let's thank the speaker again. 16:16:15 So, Joe, would you like to share your screen? 16:16:18 Yeah, sure thing. 16:16:23 I have to share the whole screen so that I can go full screen, I forgot. 16:16:32 Okay. Hopefully you can see the slides, 16:16:37 now in full screen, and the slides are working. 16:16:40 Okay, great.
So thanks to the organizers for the opportunity to give a talk on behalf of Oak Ridge National Laboratory. For those who don't know, my name is Joe Osborn; I'm a postdoc in the computing division at the lab. 16:16:59 And, oh, I lost my... there we go. 16:17:01 So, recently at Oak Ridge, synergies between the physics and computing divisions have started a forum. Oak Ridge has quite a few people at the lab who are working in different sorts of domains but have a lot of scientific overlap, and some computing specialists 16:17:20 in software development, data reduction, and streaming readout have been discussing with some of the scientists in the physics division possible collaborations looking towards the EIC. So in this talk I'm just going to give some of the very broad-ranging 16:17:35 data reduction work that's going on at the laboratory and sort of paint a picture of how this might be beneficial to us as we look towards the EIC. Just as a disclaimer before I get started: I'm speaking on behalf of many colleagues at Oak Ridge. Just 16:17:56 to name a few, from physics, Ken Read and Jo Schambach are collaborators, as well as others in the physics division, and from computing, Scott Klasky and Norbert Podhorszki, among others. 16:18:14 Okay, so I'm going to divide up this talk into sort of two parts. I'll start with the physics division and talk about some of the work that's been going on there, 16:18:23 and then also talk about the computing division, which I will get to in the second half. So just to give a short overview of what I'll be talking about: the Oak Ridge physics division has a very strong background in electronics, and so this lends very 16:18:41 well to working on several nuclear physics experiments, developing both readout and data reduction software for the ALICE experiment and the sPHENIX experiment.
16:18:56 So recently the group participated in some published work on the upgrade of the ALICE time projection chamber for the upcoming LHC Runs 3 and 4. 16:19:11 They're also a major contributor to the ALICE inner tracking system, which was discussed a little bit this morning in Yasser's talk in reference to the sPHENIX silicon tracking system, and they are also currently working on testing some sPHENIX 16:19:23 vertex detector readout and electronics. 16:19:26 So the ALICE TPC has undergone a major upgrade in the last several years, and this upgrade is intended to handle the higher lead-lead rate that the LHC will deliver in the upcoming data-taking campaigns in Runs 3 and 4. 16:19:45 One of the physics goals for this data-taking campaign is to measure rare probes at low momentum, so this kind of lends towards a desire for continuous readout. 16:20:00 But also, the actual physical TPC readout time window, when compared to the nominal interaction rate, requires that there be some continuous readout. So there is both good physics motivation and actual hardware implementation motivation to have this for 16:20:17 the ALICE TPC. 16:20:19 So the TPC has expected data rates of about three terabytes per second for the entire detector, from approximately 500,000 total channels, so this is an extremely large initial data rate coming out of the detector. 16:20:33 And just to provide some frame of reference for how this compares to the previous ALICE TPC, this is an increase in the data rate by about a factor of 100. 16:20:42 So this is a pretty significant increase, and it provides a readout challenge that requires R&D for successful physics data taking.
16:20:54 So just to give a short overview of the TPC readout scheme: the readout scheme diagram is shown here on the left, where the readout channels are on the very far left. The TPC is read out with about 3,000 front-end cards that each contain 16:21:12 five SAMPA chips. We heard about the SAMPA chips in 16:21:18 Takao's talk this morning, when he talked about the sPHENIX TPC also. Each SAMPA chip produces a 1.6 gigabit per second data rate, and if you do the math there, that leads to the roughly three terabytes per second data rate of the detector as a 16:21:33 whole. These chips are ultimately read out on FPGAs that receive the data and do some initial online processing, to then reduce the data stream by another factor going towards some buffer boxes. 16:21:52 So the detector was assembled, and the readout has been fully integrated into the experimental cavern. I always think it's nice to see actual pictures of people doing things, because it makes it a little more real, 16:22:04 so this is just a picture that was in their paper from the actual installation. The commissioning has been performed with X-rays, lasers, and cosmic-ray muons, and this plot on the right shows an example of a cosmic muon and the 16:22:22 TPC performance with just a single track. 16:22:25 And of course, continued testing, commissioning, and preparation for the upcoming Run 3, which will start in the near future, is expected. 16:22:38 I mentioned too that Oak Ridge is also involved with sPHENIX and some of the readout electronics for the micro-vertex detector at sPHENIX. Currently Oak Ridge is leading the readout testing and development for the sPHENIX MVTX. 16:22:55 And again, these nice pictures, courtesy of Jo, I think are very cool because they show some actual work that's going on: they are actively setting up a readout chain test in the lab.
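The "if you do the math" step above can be checked with a short sketch, using the numbers as stated in the talk:

```python
# Checking the ALICE TPC readout arithmetic quoted above: ~3,000
# front-end cards, 5 SAMPA chips per card, 1.6 Gbit/s per chip.
n_front_end_cards = 3000
chips_per_card = 5
gbit_per_chip = 1.6

total_gbit_s = n_front_end_cards * chips_per_card * gbit_per_chip  # 24,000 Gbit/s
total_tbyte_s = total_gbit_s / 8 / 1000  # bits -> bytes, giga -> tera
print(round(total_tbyte_s, 2))  # ~3.0 TB/s, the figure given for the whole TPC
```

So 15,000 chips at 1.6 Gbit/s each is 24 Tbit/s, which is the quoted three terabytes per second for the full detector.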
16:23:09 Setting up this readout chain is active work going on right now, in preparation for the commissioning of the MVTX in the next year, as Yasser mentioned this morning. 16:23:20 So we've heard several talks from sPHENIX over the past couple of days about how streaming readout will be utilized with the silicon plus the TPC, much like is planned for the ALICE TPC; 16:23:32 look back over the last couple of days at Martin's talk, Takao's talk, and Yasser's talk this morning. 16:23:40 And thanks to Jo for these pictures also. 16:23:44 Okay, so now let's switch gears a little bit and talk about what's going on in the computing division. Oak Ridge has quite a large computing division and is contributing to streaming readout and improved data reduction workflows in many scientific 16:24:03 areas. So the physics division has been focused really on ALICE and sPHENIX, but there are several people in the Oak Ridge computing division who are working in many different scientific areas. 16:24:15 First of all, Oak Ridge is very well known for its supercomputers like Summit and the upcoming Frontier machine; these are sort of Oak Ridge's flagship supercomputers and are very GPU-focused. 16:24:28 But Oak Ridge also has the CADES computing center, which provides cloud computing, data storage, high-speed data transfer nodes, and heterogeneous CPU/GPU nodes that are available for use. 16:24:43 CADES is actually used as a Tier 3 data analysis cluster for the ALICE experiment, so it is constantly processing data for ALICE at some of the lower levels, not immediately online of course, which is why it's a Tier 3. 16:25:03 Additionally, what I will talk about in some more detail are some dedicated contributions to improving data workflows at projects like ITER and the Square Kilometre Array. 16:25:16 I'll talk specifically about these publications here, if you are interested in reading more about them.
16:25:23 The first publication is regarding fusion experiments like ITER, and the second publication refers to the Square Kilometre Array work that was performed by some colleagues here at Oak Ridge. 16:25:39 So I'll talk about the fusion experiments now. For those who don't know, ITER is an international fusion project aiming to research fusion's applicability as a clean energy source; it's an ongoing project that is actively being built. 16:25:54 And it is projected to produce approximately a petabyte of data per day, so this again is an extremely large data production that will need to be handled and dealt with. 16:26:07 With this amount of data, it really necessitates using large-scale data movement and federated computing for data processing. So this is sort of a new challenge that ITER will have to face. Additionally, especially with projects like this 16:26:23 that have been under development for a long time, and into which a lot of work has gone into building them, some near-real-time analysis to actually guide the experimental operation is strongly desired by the group of scientists that has worked on this 16:26:39 project. Especially with a new machine, as you are sort of working out the kinks and learning about how it operates, near-real-time analysis can help guide scientific development much quicker, as even Markus was alluding to earlier in 16:26:56 this session after Jim's talk. 16:27:01 So an example workflow that one would want to implement at a place like ITER is shown here in this cartoon, where you start, of course, by actually taking some sort of data, and ideally you'd want to compare this actual data locally to 16:27:19 some pre-run simulation. Many of these fusion experiment simulations are quite computationally intensive and can take days or even weeks on some very large computers.
16:27:33 And so ideally you'd want some sort of pre-run simulation, maybe not as detailed, to immediately compare some data performance to, to determine 16:27:45 some diagnostic information. With that information, if the performance does not match the expectations from simulation, then you could talk about streaming this data to some remote HPC where maybe you use trained machine learning models to detect anomalies, 16:28:01 like we've heard about, and use machine learning information based on those models to run maybe more expensive simulations, ones that are more accurate, to understand whatever discrepancy was found 16:28:16 initially between the data that was taken locally and this simpler simulation. From that information you could then send that diagnostic back to the scientists on site, to guide the next pre-run simpler simulation and the 16:28:34 next data-taking cycle for performance diagnostics. So some kind of workflow like this is the kind of thing you could imagine implementing that would help scientists improve their workflows and guide the actual scientific project. 16:28:52 As I mentioned, ITER is not actually taking data yet, so this sort of thing can't be explored with ITER. However, there is the KSTAR tokamak, which is a fusion experiment located in South Korea, 16:29:06 and so some collaborators decided to try to implement a similar sort of workflow 16:29:12 with KSTAR.
16:29:13 So this workflow shown here, if you compare it to the previous page, is actually very reminiscent of what I just discussed: you make some measurement at KSTAR, here in the top left, then you digitize this information and send 16:29:29 it over a wide-area network with software that was developed at Oak Ridge called ADIOS. In this case they sent this data to the Princeton Plasma Physics Laboratory, which has an available compute center that was being used to do some analysis, 16:29:52 and in this case they made a movie of some fusion process. Simultaneously, a sort of more expensive simulation of this fusion process was run at Oak Ridge, and then you could compare this simulation directly with the data to provide some diagnostic information 16:30:05 and feed that back to the KSTAR scientists on site. So the goal of this test was to demonstrate this workflow paradigm, which could in principle be implemented for ITER in the future. 16:30:24 In this study they showed that the throughput demonstrated with the previous software suffered from disruptions due to packet loss over this wide-area network while transferring data from KSTAR to PPPL. 16:30:41 So in this plot here, what this shows is that the packet loss actually would lead to some throughput loss, 16:30:51 and you'd have to sort of ramp the throughput back up, and you lose valuable time that is spent not actually transferring data. 16:31:01 The new software that they wrote, ADIOS, was able to achieve a high sustained data transfer throughput of about 1.2 gigabits per second. 16:31:11 You can see in this plot, which is the same sort of plot showing the throughput, that you don't have these continuous ramp-ups over some increment of time; they were able to actually sustain this data transfer for a longer period of time.
16:31:32 So I also mentioned the Square Kilometre Array (SKA) and some work that has been done at the lab. 16:31:58 The arrays of telescopes here will produce data rates of about five terabytes per second. For the radio astronomy community, this is an extremely high rate of data, 16:32:05 and this is of course going to result in an extremely large data set that needs to be calibrated and reduced for actual analysis. 16:32:15 So to give some sense for how large of an increase in data rate this is for the radio astronomy community, the SKA data rate will actually be one to four orders of magnitude larger than current radio astronomy telescopes, 16:32:30 while the second implementation of SKA will actually be one to two orders of magnitude larger than the first. So this is quite a substantial increase in data rates that the radio astronomy community has to handle and understand. 16:32:46 So what this looks like is that they have many of these different telescopes that are providing some data, and what this equates to is having millions of files for a single observation, which then requires a robust file system that can handle all of these 16:33:02 files. You need the appropriate metadata management and archival of data for all of these files, and ideally you need to process them all in conjunction so that your measurement actually makes sense. 16:33:16 So this table on the right actually shows some of the previous telescopes and some of their data-rate information. 16:33:25 And I think the relevant thing to highlight here is that, in the input rate column, the second column from the right, you can see that the very first implementation of SKA will have a couple of orders of magnitude higher input rate than previous telescopes. 16:33:42 So this is a major challenge for the radio astronomy community. 16:33:47 So, similarly to ITER, the SKA data must be simulated, since the experiment is still being designed and built.
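[Editor's note: a quick arithmetic check of the numbers above. The ~5 TB/s rate and the "one to four orders of magnitude" comparison are from the talk; the one-hour observation length is an illustrative assumption.]

```python
# Rough arithmetic on the SKA figures quoted above.
raw_rate_tb_s = 5.0    # terabytes per second, quoted in the talk
obs_hours = 1.0        # hypothetical single observation length (assumption)

volume_pb = raw_rate_tb_s * obs_hours * 3600 / 1000   # petabytes
print(f"A {obs_hours:.0f}-hour observation at raw rate: ~{volume_pb:.0f} PB")

# "One to four orders of magnitude larger than current telescopes" brackets
# current instruments somewhere between these two rates (in TB/s):
current_low = raw_rate_tb_s / 10**4
current_high = raw_rate_tb_s / 10**1
print(f"Implied current-telescope range: {current_low} to {current_high} TB/s")
```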
16:33:53 So what some scientists at the lab did was simulate some mock-up SKA data, to sort of simulate the anticipated data rates and see if they could handle these kinds of data rates. 16:34:11 So Summit at Oak Ridge was used to simulate this data generation and processing at scales similar to SKA, and what is shown here is the data writing rate in gigabytes per second as a function of the number of Summit nodes that were able to be used 16:34:28 for this particular 16:34:32 table writing, and each color corresponds to basically a different configuration of table writing that the group was testing out. And what I think is relevant to point out here is that they were able to achieve about 0.9 terabytes per second 16:34:48 of data writing rate on the Summit supercomputer with this one particular configuration. So one thing that they did find was that the peak writing rate is very configuration dependent; as shown in this plot, the different colors actually 16:35:07 experienced very different writing rates, which is indicative that this configuration can be optimized and better suited to the particular type of tables that SKA will have. 16:35:22 So work is ongoing to identify what is actually the best configuration for this. 16:35:31 Okay. So to conclude then, Oak Ridge is involved in several experiments worldwide that are facing or will face data readout and reduction challenges. I talked about, from the physics division, how they are involved with the ALICE TPC, as well as the sPHENIX 16:35:49 MVTX at the Relativistic Heavy Ion Collider, which we heard about earlier this morning. 16:35:56 And additionally, in the computing division there are people working on data workflow challenges for future experiments like ITER and the SKA, 16:36:09 the Square Kilometre Array. Alright, so thank you, and with that I'll be happy to take any questions.
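[Editor's note: to put the Summit write-rate result in perspective, here is a trivial scaling sketch. Only the ~0.9 TB/s aggregate figure is from the talk; the per-node rate, node count, and efficiency factor are assumptions for illustration.]

```python
# Toy scaling model for the aggregate write rate discussed above.
# The per-node rate and node count are hypothetical, chosen only so that
# the product lands near the ~0.9 TB/s figure quoted in the talk.

def aggregate_write_rate(nodes: int, per_node_gb_s: float,
                         efficiency: float = 1.0) -> float:
    """Aggregate write rate in TB/s, with an optional scaling-efficiency
    factor to model configurations that scale worse than linearly."""
    return nodes * per_node_gb_s * efficiency / 1000

# e.g. ~450 nodes each writing ~2 GB/s at perfect efficiency gives 0.9 TB/s:
print(aggregate_write_rate(450, 2.0))
```

The configuration dependence seen in the plot corresponds, in this toy model, to very different effective `efficiency` values for the same node count.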
16:36:21 So let me help here to take questions from the audience. 16:36:29 Maybe let me use my position as chair and ask one: you listed quite a few very impressive examples of a broad range of physics that can take advantage of the computing at Oak Ridge. 16:36:47 To give an overview, how do you envision Oak Ridge computing will play a role, in the sense of shipping data to Oak Ridge? 16:36:56 That's a good question. So, I think, with a lot of the things that I just said... 16:37:03 So while I guess I didn't say this for SKA, they will have dedicated compute centers at the array locations because the data rate is just too high. I was once told by a colleague that at some point, when you have some amount of data, the fastest way to 16:37:22 move it across the world is still by a truck or a plane. And this will definitely be the case for experiments like SKA, where you have these massive data rates that need to be at least partially reduced down 16:37:41 initially. However, I think the use of CADES at Oak Ridge with the ALICE experiment is an example that Oak Ridge has computational resources that can be leveraged for places like the EIC. 16:38:00 So for example, you know, we could potentially use CADES in a similar sort of sense, where the EIC will have very high data rates at the experiments, and this will have to be dealt with, 16:38:13 initially, sort of, on site. 16:38:16 But at some point, when the data rates are small enough, we can use outside computational resources that will not be located at Brookhaven to actually be processing and doing further data reduction.
16:38:35 And this is true for both Oak Ridge and Jefferson Lab and, you know, any of the computational laboratories. You know, I think there are scientists right now at Lawrence Livermore who are using their computational resources also to do EIC analysis. 16:38:55 So I think a distributed, or federated, computing model is definitely one that the EIC should take advantage of, because that, especially for the national labs, is one of their strengths. 16:39:29 Yeah, thanks for the comment, and let me continue with a follow-up question. You mentioned about 1.2 gigabits per second for the KSTAR to PPPL data flow, if I heard it correctly. 16:39:26 So I assume that is a typical bandwidth at this moment, and I assume it will be much larger in the era of ITER. Is that a good estimate? 16:39:38 So, the workflow with KSTAR and PPPL was actually not using CADES; it was simply transferring the KSTAR data to PPPL. And then there were some other simulations that were performed on other supercomputers at Oak Ridge to compare to the actual data. 16:39:58 So I don't know offhand what the actual data rate is that's available at CADES. However, I do know that we have Globus data transfer nodes active at CADES. 16:40:13 So these are sort of high-speed data-transfer nodes that are available for use. Now, I just can't give you an exact number, because I don't know offhand what that is, but I would bet it's on the CADES website and I could probably find it in ten 16:40:30 minutes or something like this. 16:40:33 Right, I asked that out of curiosity. Yeah, sure, thanks for the comment. 16:40:41 [inaudible] 16:40:49 Okay, now let me also thank again the other speakers in the session. I think it has been very successful, and I would like to thank everyone who participated in the discussion and contributed to our session today.
16:41:15 at 9am with the next session. And also let me ask Douglas, are there any particular announcements from the workshop organizers? Now, thanks. 16:41:29 I just want to remind people to please remember to upload their talks to the Indico page; if you have trouble doing this, just email me your talk and I'll put it up. 16:41:33 Yeah. 16:41:34 And also thanks, of course; [inaudible] to be posted after, after this meeting.