09:00:07 I think we're doing a little better than that. 09:00:11 Okay, so I see Vardan, you're here, so this is good. And 09:00:21 we are getting there. 09:00:44 So Vardan, how do I... let me see if I can actually pronounce your last name properly. 09:00:52 I'm used to mispronunciations, but you pronounced it well; run it by me again. Okay. 09:01:00 Right. Okay. So where is this name from? It sounds Ukrainian or something. Yeah, a little bit further south: Armenia. Okay. 09:01:11 So, we're all set. Yes. Okay. All right, good. It's also nine o'clock. 09:01:17 So welcome to the session. People probably know me: Martin Purschke from Brookhaven. We were talking about this before, this little background that I have. 09:01:32 This is a view from my home office over to Brookhaven National Laboratory, which is where the lab is located, except the picture is taken 30 feet up in the air. 09:01:49 Okay, so this is the data acquisition++ session, so not all of it is strictly data acquisition; at the end we have another talk about MC, since we were just running out of space in the other session. 09:01:53 But we have a bunch of very interesting talks, and we're hearing a first talk about ERSAP, which always makes me think of some business software, but no, this is something else. 09:02:09 So, Vardan, now let me see if I get this right: Vardan Gyurjyan. Okay. 09:02:16 I'm going to practice some more. 09:02:18 So if you want to share your slides and begin. 09:02:29 Do you see my slides? Yes. 09:02:33 Great. 09:02:34 Good morning. 09:02:36 My name is Vardan Gyurjyan. I am giving this talk on behalf of my colleagues, listed here. 09:02:43 I will discuss the framework unification effort at Jefferson Lab to use a common framework for both streaming readout and data stream processing pipelines. 09:02:53 I will describe the basic concepts that form the foundation of the ERSAP framework, and the differences from the traditional way of doing things. 09:03:02 I will also discuss the advantages and disadvantages of the chosen approach. 09:03:07 I will show that the approach of using microservices for stream processing is not a new endeavor at Jefferson Lab; the main data processing frameworks at the lab, in some ways, are already using this approach. I will spend a few slides to describe 09:03:21 the framework-level adaptive, dynamically optimizing, stream-unit-level workflow management system, and I will conclude with a summary. 09:03:31 Comparing traditional trigger-based data acquisition with streaming readout systems, we expect increased data rates and in some cases increased storage requirements. 09:03:43 We also know that the basic components in a traditional DAQ system, such as data concentrators, different levels of event builders, event hubs and event recorders, function in a so-called push system environment, 09:03:57 which is exactly the reactive environment where streaming readout components will most likely function. 09:04:05 This is not the case for traditional offline processing, where passive programming is utilized: in that environment we always ask for an event, or say: here is an event, give me the tracks; here is the tracking output, do the PID, etc. 09:04:16 In all cases the actual processor, for example the second stage in the upper diagram, is passive. However, in the streaming case, in order to avoid bottlenecks and be able to control back pressure, that stage must be active at all times, 09:04:35 and react to the incoming stream of events.
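To make the passive/reactive distinction above concrete, here is a purely illustrative C++ sketch (not ERSAP code; the names PullSource, PushSource and on_data are made up) contrasting a processor that asks for each event with one that must react to whatever the stream delivers:

```cpp
// Illustrative only: contrasts pull-based (offline) and push/reactive (streaming) styles.
#include <functional>
#include <optional>
#include <queue>
#include <iostream>

struct Event { int id; };

// Passive ("pull") style: the processor is asked for work, one event at a time.
struct PullSource {
    std::queue<Event> events;
    std::optional<Event> next() {
        if (events.empty()) return std::nullopt;
        Event e = events.front(); events.pop();
        return e;
    }
};

// Reactive ("push") style: the processor registers a callback and must be ready
// whenever the stream delivers data, so it can exert back pressure if needed.
struct PushSource {
    std::function<void(const Event&)> on_data;
    void deliver(const Event& e) { if (on_data) on_data(e); }
};

int main() {
    PullSource pull;
    pull.events.push({1});
    while (auto e = pull.next())               // offline style: ask for the next event
        std::cout << "pulled event " << e->id << "\n";

    PushSource push;
    push.on_data = [](const Event& e) {        // streaming style: react to arrivals
        std::cout << "reacted to event " << e.id << "\n";
    };
    push.deliver({2});
}
```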
09:04:39 So reactive programming enables event- or message-driven stream processing. 09:04:46 What we are proposing is to build a streaming data processing application based on three basic components: a reactive actor (adopting widely used terminology, we call them microservices; 09:04:59 I will use the two terms interchangeably during my talk), a data stream pipe that is a communication channel between actors, and an orchestrator of the application. During operation a stream of data quanta will flow through a directed graph of reactive microservices, 09:05:14 where a network of independent black-box actors defines the application logic. 09:05:21 The basic difference of the ERSAP architecture compared to a traditional framework is that instead of instructions moving across actors or processes, it is the data that moves, making the actors programmatically independent. 09:05:35 Another important design consideration is that data quanta are exchanged across predefined connections by message passing, and the connections are specified externally to the actors. 09:05:48 A data stream processing application is a network of interconnected actors, where the reactive actor abstraction is presented as a data processing station in ERSAP. It provides a runtime environment for user algorithms, which we call engines, and handles 09:06:05 data communication and networking. The user engine is relieved of network programming, serialization and I/O in general, and always gets a data object. 09:06:18 The only requirement is that the engine must follow a simple data-in/data-out interface to be considered as a candidate microservice in an ERSAP application. The data processing station also provides means for engine configuration, and the scaling can potentially 09:06:38 free users from writing multi-threaded code. The ERSAP framework is a three-layer structure: a communication layer, a service layer and an orchestration layer. 09:06:47 The communication layer, or service bus, is designed to transfer data along with metadata from service to service. The service layer hosts 09:06:59 (I'm not sure if you can see my pointer) the data processing actors, and consists of multiple data processing environments (DPEs) providing the runtime environment for the actors. 09:07:10 A DPE is responsible for the actor life cycle, including deployment, registration, discovery, migration, and recovery or destruction. Within a DPE, actors can be logically grouped into task- or domain-specific service containers. 09:07:27 As a rule, ERSAP deploys a single DPE per technology on a single node. For example, there is going to be a single C++ DPE on the node; however, if the application composition requires Java or Python based actors, then a secondary DPE will be deployed. 09:07:49 The front end 09:07:53 (FE) is a special DPE that houses the master registration service, where actors deposit their operational details, including addresses, input/output data types, version, author and a short description of the processing algorithms. 09:08:08 Note that actor registration in the ERSAP framework service layer is federated: each DPE has a local registration of all services running on that particular data processing environment. 09:08:18 ERSAP provides standard services such as gateway, security and authentication services; the gateway service is required for an application whose actors are running behind a firewall.
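A minimal sketch of that data-in/data-out engine contract, using hypothetical type and method names rather than the actual ERSAP API (the real framework supplies the runtime, transport and configuration machinery):

```cpp
// Sketch of a "data-in / data-out" engine; names here are assumptions for illustration.
#include <cstdint>
#include <string>
#include <vector>

// Opaque data quantum handed to the engine by the data processing station.
struct DataQuantum {
    std::string mime_type;         // e.g. "binary/fadc-hits" (hypothetical)
    std::vector<std::uint8_t> payload; // framework handles (de)serialization
};

// Any algorithm implementing this interface could be wrapped as a microservice.
class Engine {
public:
    virtual ~Engine() = default;
    virtual void configure(const std::string& yaml) = 0;    // engine configuration
    virtual DataQuantum execute(const DataQuantum& in) = 0; // data in -> data out
};

// Example user engine: pure algorithm, no networking, threading or I/O code.
class NoiseFilterEngine : public Engine {
    double threshold_ = 0.0;
public:
    void configure(const std::string& /*yaml*/) override { threshold_ = 5.0; }
    DataQuantum execute(const DataQuantum& in) override {
        DataQuantum out = in;      // a real engine would transform the payload here
        return out;
    }
};

int main() {
    NoiseFilterEngine engine;
    engine.configure("");
    DataQuantum result = engine.execute({"binary/fadc-hits", {1, 2, 3}});
    return static_cast<int>(result.payload.size()) == 3 ? 0 : 1;
}
```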
09:08:33 The orchestration layer is where the workflow management system and the data processing application composition reside. 09:08:39 The data stream pipe provides asynchronous pub-sub 09:08:43 communication and point-to-point communication, as well as defining the transient data. Currently we transfer data using Google's protocol buffers, or FlatBuffers. 09:08:54 This does not mean that the data must be presented as protocol buffers: the ERSAP data stream pipe is agnostic to the user-provided data format and will transfer any data, as long as serialization and deserialization routines are provided for custom 09:09:09 data. ERSAP provides a default data format, as I mentioned, based on protocol buffers / FlatBuffers, supporting primitive types, arrays of primitive types and complex trees of these arrays; for the default data format the framework takes care of the serialization 09:09:27 and deserialization. Here you see the ERSAP transient 09:09:38 data structure, which consists of 09:09:46 the data address part, the metadata part, and the actual data part, 09:09:55 which in most cases will be empty as a result of a deployment optimization that I will explain in a minute. 09:10:04 I would like to sprint quickly through a few slides to describe the adaptive real-time performance optimization that the ERSAP orchestrator is designed to do. 09:10:13 The main advantage of an actor/microservice based system can, I like to say, also become a disadvantage: it can become a quite complex directed graph of actors distributed across a network of heterogeneous infrastructures. 09:10:28 In this case the role of the application orchestration becomes quite critical. 09:10:32 We spent a considerable amount of time and effort designing the workflow distribution orchestrator, as well as an API allowing domain experts to design their own workflow orchestration systems. 09:10:46 First of all, the orchestrator is responsible for locating user engines, presenting them as data processing microservices and deploying them based on domain expert requirements. The domain expert defines the application graph, or composition, using a simple YAML file or a 09:11:02 graphical UI. At runtime the orchestrator listens to and reports the data processing actors' performance, status and possible errors. 09:11:09 It is the entry point for users trying to design or expand an existing data processing application, by providing access to the registry and discovery services. 09:11:21 Most importantly, it optimizes stream data communication by optimizing actor deployment. Here's what I mean: 09:11:30 if actors can be deployed within the same runtime, i.e. they share the same technology, the same language for example, it will do so to avoid serialization; in this case the data quantum will be passed between actors 09:11:44 through in-process shared memory, avoiding serialization and copies. 09:11:51 The same logic applies when actors are deployed on the same node.
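A rough sketch of that transient envelope (address, metadata, data), with hypothetical field names; when actors share a runtime or node, the data part stays empty and the metadata simply points into shared memory:

```cpp
// Illustrative envelope layout; field names are assumptions, not the ERSAP wire format.
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

struct TransientEnvelope {
    // Address part: where the quantum is going.
    std::string destination;            // e.g. "dpe-node1:container/engine" (assumed)

    // Metadata part: describes the payload and how to find it.
    std::string mime_type;              // serialization hint, e.g. protobuf / FlatBuffers
    std::uint64_t stream_time = 0;      // timestamp of the data quantum
    std::optional<std::string> shm_key; // set when the payload lives in shared memory

    // Data part: the serialized payload; left empty when shm_key is set.
    std::vector<std::uint8_t> data;
};

int main() {
    TransientEnvelope colocated{"dpe-node1:reco/tracking", "binary/hits", 42, "shm:7f00", {}};
    return colocated.data.empty() ? 0 : 1;  // co-located case: data travels via shared memory
}
```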
09:11:56 If actors are on the same node but in different runtimes, this is still better than passing data across the network, and POSIX shared memory will be used. The ERSAP cloud management system is equipped with adaptive functionalities to guarantee optimal performance and minimize 09:12:12 the underutilization of resources in a shared heterogeneous cluster. It is important to know the service/hardware-instance-type relationships, which we call service affinity; finding a good mix of different hardware allocations becomes critical. However, 09:12:28 job submission resulting in a hardware allocation has limited predictability: 09:12:33 users submit a job to the farm with a few hints about the job characteristics, including memory requirements and the ability to use specific accelerators such as GPUs. 09:12:43 The ERSAP cloud management system will define service affinity using relative computing rates calculated in real time. For example, the computing rate of the time-of-flight microservice is derived from the time spent processing a single event on a 09:12:58 single core, slot or computing unit. 09:13:02 The sum over the available processing slots, T_i, indicates the performance of a particular microservice scaled vertically over those slots. In the shown scenario we got access to node 1 and node 2, node 2 being equipped with an accelerator (GPU). At the deployment 09:13:20 stage the ERSAP orchestrator, or workflow management system, will deploy two hit-based tracking microservices in the composition, effectively branching the composition and streaming events to both drift-chamber hit-based tracking microservices in the 09:13:37 pipeline. After running N events, or a time slice, the DCHB CPU-based and DCHB GPU-based services will report their computing rates to the orchestrator; at that point the orchestrator will calculate process-share variables PG and PC for the GPU and CPU branches of the pipeline, 09:13:55 and then decide which branch to keep for the current production. Due to the fact that in a farm deployment there are many moving parts, such as data serialization, data copying from CPU to GPU, the hardware performance of where the job lands, etc., ahead- 09:14:11 of-time deployment optimization is sometimes unrealistic, so this is a very useful 09:14:17 adaptive functionality. The ERSAP orchestrator is also capable of monitoring device occupancy constantly; in real time it will adjust the streaming data quantum size to achieve maximum device occupancy. The orchestrator is also capable of detecting the NUMA socket 09:14:33 structure of the node that the application has landed on, and will deploy and run multiple NUMA-node-pinned applications, instead of running a single one on the node. 09:14:44 This slide shows an ERSAP streaming data pipeline, including SRO data acquisition and data stream processing actors. I'm aware that there is a consensus that the EIC SRO should simply record streams without much processing; 09:14:59 you, Martin, in your talk even suggested writing the streams without aggregation. Yet I think that reasonable data processing, with or without data reduction, would be beneficial to speed up the later processing. 09:15:11 At the end of the day, what matters is the time between data taking and publication; we have to factor in I/O latencies even if they happen at different times in the future.
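A small sketch of the branch-selection idea just described (not the actual ERSAP orchestrator code): per-slot processing times are measured at run time, summed into a vertical-scaling rate, and the process-share variables for the CPU and GPU branches follow from the two rates. The slot counts and timings below are assumptions.

```cpp
// Illustrative computation of relative computing rates and process-share variables.
#include <iostream>
#include <vector>

// Per-service computing rate: events per second summed over the available slots.
double service_rate(const std::vector<double>& seconds_per_event_per_slot) {
    double total = 0.0;
    for (double t : seconds_per_event_per_slot)
        if (t > 0.0) total += 1.0 / t;     // vertical scaling: sum over slots T_i
    return total;
}

int main() {
    // Measured processing time per event (s) on each slot after a warm-up period (assumed values).
    std::vector<double> dchb_cpu_slots(32, 0.050); // 32 cores, 50 ms/event
    std::vector<double> dchb_gpu_slots(1, 0.002);  // 1 GPU slot, 2 ms/event

    double r_cpu = service_rate(dchb_cpu_slots);
    double r_gpu = service_rate(dchb_gpu_slots);

    // Process-share variables for the two branches of the pipeline.
    double p_cpu = r_cpu / (r_cpu + r_gpu);
    double p_gpu = r_gpu / (r_cpu + r_gpu);

    std::cout << "CPU branch share " << p_cpu << ", GPU branch share " << p_gpu << "\n";
    // The orchestrator would keep the faster branch (or split the stream accordingly).
}
```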
09:15:22 The point of this slide is to show how the same framework can be used for streaming readout with real-time data reduction or software triggers, for near-real-time processing such as data quality assurance, and finally for full offline data processing 09:15:59 and physics analysis. Highlighted are compositions that can be considered as building blocks of this unification. For example, the left side consists of two streams 09:15:50 of crate streaming data going into the data lake, or distributed buffers, then aggregator and edge processors that can perform, for example, crate-level noise reduction at the root of the aggregator tree. 09:16:04 We get a single stream of built, cleaned, partially processed events that can continue being processed, or not, before getting persisted. 09:16:15 I'm not advocating building a complete event before storage: this data pipeline graph can be for one detector, or for a subset of detectors, so we can have a reduced number of streams and possibly reduced data, with no loss of physics of course, 09:16:30 persisted ready for offline analysis. I want to mention that 09:16:35 this is not just a concept: the highlighted constructs are deployed and tested; specifically, offline data processing based on this model has been in production, evolving, for almost 10 years now. 09:16:50 Along our combined effort to design the streaming data acquisition and processing system, we have to decide where to do some of the routine operations, such as aggregation. 09:17:01 In order to stress the system and understand resource requirements in case we do stream aggregation in software, we developed benchmark VTP stream aggregator and FADC hit-builder microservice engines, along with the data lake software. 09:17:25 Below is a cartoon illustrating the aggregation and hit-processing algorithm that, per 32-nanosecond timestamp, provides crate, slot, channel and integrated charge data. 09:17:44 The aggregation is a simple merge of the 65-microsecond FADC frames, including flush frames, etc., with everything timestamped and 09:17:49 zero suppressed. 09:18:05 Each hit processor will group all of the FADC hits for a particular timestamp, decode the information and present the data as integrated charge. 09:18:03 Hall B has been using a reactive actor based data processing framework since 2011, 09:18:08 so the offline stream data processing is well tested, both in offline reconstruction and in physics data analysis trains. 09:18:15 There was also an effort to run the cluster reconstruction microservice application online, where instead of a reader service 09:18:28 that creates a stream of events from a file, 09:18:31 we deployed CODA event transfer (ET) system stations, which feed events from the DAQ shared memory during data production. 09:18:57 The CLARA framework, which satisfies most of the design requirements of ERSAP, is a mature framework producing physics data on the farm: for example, last year 35 million CPU hours were used to produce half a petabyte of DSTs and about 20 terabytes of physics analysis 09:18:59 data. 09:18:59 All the engines developed by detector experts are single-threaded, I would like to mention, and CLARA scales them linearly over the available cores in a node.
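A minimal sketch of the aggregation/hit-processing step just described: group FADC hits by their 32 ns timestamp and (crate, slot, channel), summing the integrated charge. The struct layout and names are assumptions for illustration, not the benchmark engine's actual payload format.

```cpp
// Illustrative grouping of FADC hits per timestamp and channel.
#include <cstdint>
#include <iostream>
#include <map>
#include <tuple>
#include <vector>

struct FadcHit {
    std::uint64_t timestamp;   // in 32 ns ticks
    int crate, slot, channel;
    double charge;             // integrated charge reported by the FADC
};

using HitKey = std::tuple<std::uint64_t, int, int, int>;

std::map<HitKey, double> aggregate(const std::vector<FadcHit>& hits) {
    std::map<HitKey, double> out;
    for (const auto& h : hits)
        out[{h.timestamp, h.crate, h.slot, h.channel}] += h.charge;
    return out;
}

int main() {
    std::vector<FadcHit> frame = {
        {100, 1, 3, 7, 120.0}, {100, 1, 3, 7, 15.0}, {101, 1, 3, 8, 42.0}};
    for (const auto& [key, q] : aggregate(frame)) {
        auto [ts, crate, slot, ch] = key;
        std::cout << "t=" << ts << " c/s/ch=" << crate << "/" << slot << "/" << ch
                  << " charge=" << q << "\n";
    }
}
```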
09:19:08 The curve fit is proof of the linearity of the scaling; it also shows the deviation from the curve when we cross physical core boundaries. The ERSAP framework provides an abstraction for the processing service, where the user is responsible for supplying the 09:19:27 data processing algorithm and presenting it as a concrete data processing actor. The JANA2 framework, in operation since 2005, is used not only at Hall D GlueX but also for EIC detector development, as well as during the streaming readout test runs at Hall B. 09:19:46 Deep inside, JANA2 utilizes generic programming capabilities and embraces the heterogeneity of future HPC systems. 09:19:55 It is a modern C++ based multi-threaded framework that is used to develop high-performance data processing engines and algorithms that scale vertically across multi-core systems. 09:20:06 At the bottom right is a cartoon showing JANA2-based microservices within the ERSAP ecosystem. 09:20:17 Recently we established a collaboration with the TriDAS group, bringing their rich experience and perspective on streaming readout systems. 09:20:25 The slide shows the proposed integration of the TriDAS components within ERSAP. I want to mention the importance of diverse, fresh ideas that can prevent mistakes due to over-concentration. As a pre-summary, I want to emphasize the advantages of the 09:20:43 independent reactive actor model, or microservices. It forces you to perform a functional decomposition of the overall data processing application into small mono-functional artifacts; being small, they are easy to understand and develop, which will definitely reduce 09:20:58 the develop-debug cycles and enable rapid prototyping. Actors can be easily migrated and can be scaled independently: slow actors can get more resources and can be moved to run on accelerators. 09:21:15 Some of the functionalities of an application require different approaches in terms of optimization; for example, I/O optimizations really are different from memory or CPU optimizations. 09:21:25 In a traditional monolithic environment any segmentation fault, any exception, will bring down the entire system, which is not the case for this type of application, where faults are very nicely isolated and can be easily tolerated. 09:21:42 Also, being small, components can easily be thrown away and rewritten in a new technology, so this eliminates long-term commitment to a single technology stack. 09:21:56 And I am at my summary. A reactive actor model based framework, ERSAP, is under development at Jefferson Lab. It is based on mature frameworks 09:22:07 used at the laboratory. We did perform resource requirement studies showing the feasibility of near-real-time data processing in terms of resource utilization, and we defined proper interfaces for the reuse of legacy classes and actor artifact integration. 09:22:23 This is a collaborative effort between the Jefferson Lab physics division and CST, and we have an international collaboration with the TriDAS group. 09:22:33 Thank you for your attention. 09:22:35 All right, Vardan, thank you very much for this very interesting talk. 09:22:40 Let me just comment on this thing that you said about the processing. 09:22:49 You know that I did say in my talks that we are not doing a lot of processing, but that's kind of a speciality of the heavy-ion data that we are taking, because there isn't actually that much that you can do.
09:23:01 I don't think this is going to hold for the EIC data, so I think we will have a good amount of higher-level triggering really going on there, right. So please don't generalize my statement; I apologize for the misunderstanding, 09:23:17 right. Okay, so I don't actually see any hands, so are there any questions? Oh yeah, Jin, go ahead. 09:23:30 Sorry about that. So, a very interesting framework, and also the philosophy of the design of the framework. Could you comment on how you handle fault isolation, especially during large productions? 09:23:40 How do you isolate, say, crashes and present the data to the developer, so that the code can be debugged? 09:23:49 And the second is... 09:24:00 Okay. So, let me answer about fault isolation. For example, I will show this slide. 09:24:11 Take for example this physics reconstruction application, which is based on detector-component-based microservices, right. 09:24:23 Everything is coupled by the data only; there are no programmatic dependencies, and even if they are deployed within the same process, they are deployed as threads. 09:24:36 So if one thread goes down, the application will still be active; what we will be seeing at the end of the processing chain is that the stream is disrupted, so we don't have a stream of events. 09:24:51 However, there is also the orchestrator, which is constantly getting information from every microservice in the chain in terms of its functionality, 09:25:05 processing rates, etc. Again, those are independent: the orchestrator is independent and is not part of the processing chain. 09:25:16 So it will see that this information is missing from that particular microservice, that particular element of the chain. 09:25:27 So we will quickly understand: first, the stream is disrupted, 09:25:35 and at least the first microservice in the chain which is supposed to process and send the information is not sending it; that is an indication that the fault is within that particular element of the composition. 09:25:52 So, in case one of the components is constantly crashing, the orchestrator can restart that particular engine as a microservice and continue the processing; however, if it keeps crashing, that will be reported back 09:26:13 to the user, saying that your engine is failing, so we cannot process further. If the domain expert in his composition specifies that no matter what happens, if this engine is failing just bypass that particular engine, it will do so: it 09:26:31 will redesign the composition and run it again. So there are lots of hooks there in order to do fault isolation and fault tolerance. 09:26:43 Okay, that makes sense, and my question is about both. For example, if there's tracking code, and it crashes at, say, the 10^-7 level, and since it's tracking code it cannot be bypassed. 09:26:52 Yes. 09:26:54 So how do you handle that, so that the whole process doesn't get stopped? 09:27:05 And how do you report that crashing part back to the developers, so they can do, like, a memory trace? 09:27:12 Oh yeah, so you mean what happened along the chain, is that what you're saying? 09:27:20 Yeah.
And how does it move forward? I mean, if it only crashes at the 10^-7 level, does the data-chunk processing move forward to the next data chunk, skipping the broken one? 09:27:36 Yeah, so again it depends on the design of the application. If the designer decides so, and if it is a very tiny fraction of particular events, it will just ignore that event and give the next event to the engine. 09:27:55 For example, if it is completely stuck, let's say the charged-particle reconstruction, the tracking, is just stuck, then it will be restarted and given the next event from the stream. 09:28:12 Of course, those data lakes will be buffering constantly, so there is a limit; after buffering we have a choice. If the designer says, okay, data lake: 09:28:25 after the events fill up the memory, if there is not enough space in memory and everything is filled up to the high watermark, then start dumping to disk. 09:28:39 Yeah, can I just jump in here for a second. I mean, Vardan, first of all you missed the opportunity to say "What do you mean crashing code, our code never crashes", but Jin's question is serious in the sense, right, 09:28:52 that if you have such a low level of problems, it doesn't have to be crashes or so on, you might easily introduce some kind of bias, because typically it's a multiplicity dependence or something like this; 09:29:02 it's a special kind of data that might actually crash it, right. But what I heard is basically that, okay fine, you move on, but eventually you're going to investigate why this actually happens, right, on this level. 09:29:15 That's what you were saying, right? Yes, yes. Okay. 09:29:20 Okay so, um, I see your hand up, so let's go ahead. 09:29:29 Yeah, I actually have a connected question; if you can go back to the slide you showed before, I think it's slide 14. 09:29:34 So it's these microservices. One common problem is that you have to write a lot of messages, right, and unpack a lot of messages. So do you actually write messages between each of these blocks, or is this somehow optimized out? 09:29:54 The way it's drawn, it seems like the whole data is passed from block to block. Is that true, or is that actually optimized too? Yeah, that's definitely optimized, as I mentioned here. 09:30:07 These black lines are the control messages. You will have very tiny control messages, which also can go over IP. 09:30:18 And again, the data movement is optimized based on the deployment mechanism: if, again, they are deployed within the same process, 09:30:29 there is no data movement at all, it stays in shared memory, and 09:30:41 the metadata, which is passed as a control message, will indicate where the data is within the shared memory, or whether it is within the envelope in serialized form. 09:30:51 So it is always based on the deployment, which is optimized; 09:30:57 we are trying to minimize data copying and data serialization. Okay, okay, that's great. Thank you. Okay, so, um, okay, thank you very much again. 09:31:09 So this was, you know, a kind of very lively discussion, nice talk, so thank you again. We're going to move on to David Abbott in a second, but first a request for Maurizio, the speaker after him.
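As a side note, here is a rough sketch of the high-watermark behavior described in the answer above: events are kept in memory until the buffer reaches a watermark, then the oldest chunks are spilled to disk. This is my reading of the mechanism, not the actual data-lake actor; the class and threshold names are made up.

```cpp
// Illustrative memory buffer with a high watermark and spill-to-disk behavior.
#include <cstddef>
#include <deque>
#include <fstream>
#include <string>
#include <vector>

struct EventChunk { std::vector<char> bytes; };

class DataLakeBuffer {
    std::deque<EventChunk> mem_;
    std::size_t bytes_in_mem_ = 0;
    std::size_t high_watermark_;   // e.g. a fraction of available RAM (assumed)
    std::ofstream spill_;
public:
    DataLakeBuffer(std::size_t high_watermark, const std::string& spill_file)
        : high_watermark_(high_watermark), spill_(spill_file, std::ios::binary) {}

    void add(EventChunk chunk) {
        bytes_in_mem_ += chunk.bytes.size();
        mem_.push_back(std::move(chunk));
        while (bytes_in_mem_ > high_watermark_ && !mem_.empty()) {
            const auto& oldest = mem_.front();          // spill oldest chunks to disk
            spill_.write(oldest.bytes.data(), static_cast<std::streamsize>(oldest.bytes.size()));
            bytes_in_mem_ -= oldest.bytes.size();
            mem_.pop_front();
        }
    }
};

int main() {
    DataLakeBuffer lake(1024, "spill.bin");
    lake.add({std::vector<char>(2048, 'x')});  // exceeds watermark, triggers a spill
}
```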
09:31:21 Could you please upload the slides, or send them to Douglas, so that we get them? Yours is the only set of slides that is still missing. All right. 09:31:28 Okay, I tried but I couldn't. Okay, so send them. Yes. 09:31:33 And then he will upload them. 09:31:44 Okay, great. Okay, um, so we are staying in the Jefferson Lab orbit: David Abbott is going to tell us about the FPGA-based hardware systems at JLab. So, yes. 09:31:50 Okay. 09:31:51 Thank you, Martin. 09:32:12 I'll probably just share my desktop. 09:32:14 Yep. Okay, yeah. 09:32:16 Okay, so bring it up. Yeah, exactly. Okay. 09:32:23 All right, you can see that? 09:32:29 Okay, so 09:32:33 I'm going to talk today about, I guess, a follow-up to the previous 09:32:42 talk: 09:32:42 that group was focused primarily on the back-end processing. We're going to shift gears and look at some of the things we're doing at Jefferson Lab 09:32:52 to support the front end and the experiments 09:33:05 here at the lab. A lot of the hard work for these FPGA-based hardware systems was done by Ben Raydo, and so I'm going to be presenting a lot of the work that he's been involved in. 09:33:26 Okay so, at Jefferson Lab we have four experimental halls; they're all running with different detectors, and they have different physics priorities and interests. 09:33:41 Future approved experiments are backed up and prepping for their turn on the floor, and of course they all have increased demands on the data acquisition. 09:33:56 We all know experiments are becoming increasingly dependent on custom electronics to interface our detectors and digitize the signals, and ASICs and FPGAs are the norm and the future for the front end. 09:34:15 Obviously the focus for this workshop is the streaming model, and interest in that for upcoming experiments is growing quickly. 09:34:26 We've had a number of proof-of-principle tests for streaming in several of the halls, which have been talked about in previous workshops. 09:34:37 Our goal is really to try to support both the traditional triggered model, which pretty much all of the existing experiments and many of the upcoming near-term experiments are still going to be using, and the streaming model which we want to introduce, 09:34:56 and support it all within one integrated DAQ framework. We're going to try to use the existing hardware that we have at the lab to implement streaming, and in the process we'll add support for new electronics that come in for different 09:35:14 experiments, and we're trying to make this as seamless and user-friendly as possible for the users of the data acquisition system for their experiments. 09:35:35 So here's CODA; it is our DAQ toolkit at Jefferson Lab, and it's used in various forms in all the experimental halls. As I said before, my focus today is going to be on the readout controller, which sits at the front end and is responsible 09:36:00 for collecting data from all the hardware and then sending it on to the next stage, where Vardan and company can worry about it. 09:36:11 So this talk is going to focus here, on what we're doing to update the front end to support the next stages.
09:36:24 Traditionally we use a lot of VME-based front ends, and everybody's quite familiar with these because they've been around for a long time. 09:36:40 What we typically see in our VME front ends is that we have our digitizer boards which sit in the crates. This is pretty standard for us: we have an Intel CPU which runs a software readout controller in the 09:37:00 crate, and we have all our digitizing boards in the crate. 09:37:05 For our clock and trigger distribution system we have a VME-based board as well that brings in the clock and the trigger, and it provides triggers, timestamps and clocks for the 09:37:24 crate. 09:37:27 We're able to read out triggered data over the VME bus; using some of the latest designs we can get up to 200 megabytes per second 09:37:45 over the VME bus. But our output is over standard Ethernet to the next stage, event building, and the CPUs typically have a one-gigabit link off of the crate. 09:38:03 So this is what we're stuck with. But look at the capability of the hardware which is being plugged into these crates: for instance, one of our 250 MHz flash ADC boards can generate 09:38:14 48 gigabits per second for all 16 channels on a single board. 09:38:20 If we put up to 16 modules in a crate, you now have the potential for 700-plus gigabits per second for a full crate. 09:38:34 Now, nobody wants to deal with all that data, but the fact that we can generate it creates a situation where, particularly for many experiments, cutting things down to less than one gigabit per second off the crate is just not going to work anymore. 09:39:01 This is a bottleneck: it's not going to work for triggered readout, and it's definitely not going to work for streaming. So what are we doing? Here is where we get to it. 09:39:09 You've heard in several of the talks earlier in the workshop about upcoming experiments, and one of the things that JLab did for our 12 GeV upgrade is that we decided to use the VXS standard, which is basically an addition: 09:39:33 in addition to the VME backplane, you have a dual-star switched serial backplane, where every single VME slot acts as a payload slot and has four lanes of up to 09:39:52 a total of 20 gigabits per second, theoretically, going to each of the two switch slots in the middle of the crate. We standardized on this 09:40:07 originally to deal with the trigger path; this was the data path for the trigger, 09:40:16 but 09:40:16 it seems clear that we can use it for more than that. 09:40:22 The important thing is that in order to use the VXS in a more flexible way, we need something in one of the switch slots to coordinate it all, 09:40:33 and that's where we have introduced the VXS trigger processor (VTP). 09:40:39 Here is where we hope to be able to relieve the ROC of all of its heavy-lifting tasks and implement them in the FPGA for data transport. 09:40:55 What's nice about this is that it does triggered and streaming readout from all the payload modules in parallel, so it's much faster than trying to do it over the VME bus.
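A quick back-of-the-envelope check of the numbers quoted above; the 12-bit sample width is my assumption for the 250 MHz, 16-channel flash ADC:

```cpp
// Bandwidth arithmetic for one FADC board and a full crate vs. the legacy readout paths.
#include <iostream>

int main() {
    const double sample_rate_hz = 250e6;
    const double bits_per_sample = 12;     // assumed sample width
    const int channels_per_board = 16;
    const int boards_per_crate = 16;

    double board_gbps = sample_rate_hz * bits_per_sample * channels_per_board / 1e9;
    double crate_gbps = board_gbps * boards_per_crate;

    std::cout << "Raw per board: " << board_gbps << " Gb/s\n";   // 48 Gb/s
    std::cout << "Raw per crate: " << crate_gbps << " Gb/s\n";   // 768 Gb/s (the "700+")

    // Compare with the legacy paths quoted in the talk.
    std::cout << "VME readout:   " << 200.0 * 8 / 1000 << " Gb/s\n"; // 200 MB/s ~ 1.6 Gb/s
    std::cout << "Crate uplink:  1 Gb/s (1 GbE)\n";
}
```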
09:41:07 But it does require that the payload modules have some type of intelligence or programmability on board, and of course a connection to the VXS bus and the serial-link capability. 09:41:21 So they typically would have to be FPGA-based payload boards. 09:41:30 Now, the software ROC, 09:41:32 the traditional software ROC, is just going to be primarily responsible for configuring, controlling and monitoring the VTP-based data acquisition system here. 09:41:44 Looking a little bit closer at this VTP board, at the guts of it: 09:41:49 it's made up of two FPGAs; we have a Zynq system-on- 09:42:04 chip here, and then we have a Virtex-7 FPGA as well. The idea here is that we can run a standard Linux OS on the Zynq processor. 09:42:23 It's a two-core ARM with one gigabyte of DDR3 memory available to it. 09:42:41 And the Virtex-7 09:42:33 is primarily responsible for receiving the serial data on the serial lanes; it receives up to 16 payload ports, 09:42:45 a total of 64 serial lanes, from the VXS backplane. 09:42:50 Then we have an additional four QSFP ports on the front panel, which you see in the picture, and that provides 16 additional lanes, which allow for external serial links as inputs, or as potential outputs; in the original 09:43:12 model the trigger processing could be sent out via these QSFP ports. 09:43:21 So 09:43:26 the other thing about the VXS crate is that it's a flexible platform. Our main module is the 250 MHz flash ADC; that's used pretty much by all of the experimental halls in one way or another. 09:43:44 That's the workhorse. Here's an example of our streaming testbed in the lab, where we have a total of eight 09:43:56 FADC modules in a crate, and we have them instrumented with some pulser data, and we'll talk about some of that testing in a little bit. 09:44:09 But you see, we also have additional modules. 09:44:12 In addition to supporting traditional VME boards, which you can plug into one of these crates 09:44:19 if necessary, we have additional custom boards: we have another FPGA-based board, which we call the subsystem processor (SSP); 09:44:29 it has QSFP inputs on the front panel of its module, so if we're bringing in external data and it needs some additional processing before being sent to the VTP, that's available. 09:44:41 For a different experiment we also just created what is basically an adapter board from QSFP to the VXS that extends the four lanes to the VTP and provides a simple external way to bring in 09:45:15 custom electronics. One of these is going to be used for some GEM detectors for upcoming experiments. Now, I'll say that the streaming model that we want to support grew out of the original purpose of the VXS, 09:45:27 which was that this was the trigger data path, so it was streaming trigger data and processing it. But a more immediate need is that a lot of the upcoming experiments, some of which you heard about, want to start reading triggered data out of 09:45:45 the VTP as well, since that's much more efficient, much faster, than VME.
09:45:54 So the idea here is that we're going to have a standard CODA ROC, a software ROC which will run on the Linux operating system, and we will implement firmware and driver libraries that allow the ROC to control the FPGA event building and data 09:46:16 flow. What you can see is that the streaming and the triggered models are typically very similar. 09:46:21 You have modules which sit on the VXS backplane, and they can send their data, either triggered data or streaming data, via the VXS through the Virtex-7; we aggregate it, and then we can send it out over 09:46:43 10-gigabit links from 09:46:49 the VTP processor, 09:46:53 and the data is not necessarily touched by the software ROC anymore once things are flowing. 09:47:05 Here are some details of what the FPGA-based readout controller looks like on the VTP; now you have to sort of flip your view: we have the Virtex-7 sitting on the top and the Zynq on the bottom, so the VXS payloads are here. This 09:47:23 is the structure for the triggered readout but, as you can imagine, the streaming just replaces a couple of these blocks with firmware doing something slightly different. 09:47:37 The original software ROC sits on the processor, and we have hooks into registers, and the user is still able to go in and access and configure the readout data banks: how they're going to read each of these payload 09:47:54 slots and what data banks they're going to be put into. Those data banks can then be sent to the event builder, which ties in the trigger information from the TI module in the crate 09:48:13 and then formats it in the right form to send out over 10 Gigabit Ethernet to the standard event builder. So these hardware 09:48:24 ROC data can actually coexist with other ROCs in a DAQ system that are using traditional readout; all will be built at the event builder, and the two streams will look exactly the same. 09:48:39 For those folks who are looking forward to this support for VTP-based triggered readout in some of the experiments: we got this working just in the past week or two, 09:48:55 but of course testing and development continues. 09:49:01 Now I want to move on to some other testing we were doing with the streaming 09:49:17 DAQ in the same VXS form factor; here we're using the FADCs and streaming data from them, 09:49:21 and we are curious about the TCP performance. 09:49:29 For the output we've been doing TCP connections; this is of course necessary for triggered readout, 09:49:43 but we're also using it for streaming readout, so the streaming data come through the FPGA, through the ARM processor, and then the streaming model allows for up to four 10-gigabit links to come out, 09:50:02 and you can map the data being streamed from subsets of FADCs onto certain 10-gigabit streams. 09:50:12 We were doing tests with two 10-gigabit streams, so there were four flash ADCs coming out on one stream, and the other four flash ADCs' data were aggregated on the other stream.
09:50:32 What happens is that the data frames, the time slices, are buffered here on the ARM processor before transmission. 09:50:45 Because of the TCP connection we have to have some buffering there, and we allow for it. 09:50:59 The pulser data that were coming in were around a megahertz, which basically corresponded to about 620 megabytes per second, which is about 50% of the 10-gigabit bandwidth. 09:51:13 The data frames are 65-microsecond time slices, so they're coming at about 15 kilohertz, and they have about 42 kilobytes per frame. 09:51:27 At this rate these two data streams were connected to the server over two separate socket connections, 09:51:41 but the connection between the switch and the server was a 100-gigabit connection, so we're really not bottlenecked from a network standpoint at all here. 09:51:55 One of the things we noticed is that with very little buffering we get fairly regular frame drops; it's a very small percentage of the data, but the fact is that the VTP had to drop frames because 09:52:33 the TCP connection was being inefficient in some way. Now, initially we discovered a big inefficiency because there was a bad optics module, so we were getting a lot of errors on transmission. 09:52:38 Once we got rid of that, we still saw some frame drops, and we saw an interesting structure to them. As we increase the buffer levels you see the frame drops will start to disappear, and there will be gaps where 09:52:55 there are no frame drops for a while. 09:52:58 But even at a buffer level of a thousand 09:53:02 we still saw a very few hiccups that would show up periodically, 09:53:08 over a very short time period. 09:53:10 So there's clearly an inefficiency going on there, and we started looking at this over the long term. 09:53:20 Over many hours we actually see an interesting structure where you get these frame drops coming at a sort of hard periodic 09:53:32 rate. 09:53:35 But rather than trying to increase buffering, Ben was able to basically bump up the size of the TCP transmit buffer on the VTP, and by doubling that from 64 to 128 kilobytes, 09:53:52 basically everything went away and was clean again. So there's definitely a dependency on 09:54:05 that transmit buffer. 09:54:08 Just in general, 09:54:23 in further testing, we found that we tend to max out the current TCP stack that we're using for the VTP at between about seven and eight gigabits per second. 09:54:27 We can probably do better than this; there might be alternate TCP stacks we can choose, but it would be nice to not have to put so many resources, like buffering, into supporting this. 09:54:45 From an FPGA perspective, the most efficient way to transport streams over Ethernet would ideally be UDP, and jumbo frames for that matter. 09:54:55 This is something we would definitely plan to investigate. As I said before, though, this is really not practical for the triggered model, but the TCP 09:55:08 performance is certainly sufficient for the experiments' requirements 09:55:20 as we know them now.
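A quick consistency check of the streaming numbers above: 65 microsecond time slices and roughly 42 kB frames indeed correspond to about 15 kHz of frames and about half of a 10 Gb/s link.

```cpp
// Frame-rate and throughput arithmetic for one 10 Gb streaming link.
#include <iostream>

int main() {
    const double slice_s = 65e-6;        // 65 microsecond time slices
    const double frame_bytes = 42e3;     // ~42 kB per frame on one stream

    double frame_rate_hz = 1.0 / slice_s;                        // ~15.4 kHz
    double throughput_MBps = frame_rate_hz * frame_bytes / 1e6;  // ~650 MB/s
    double link_fraction = throughput_MBps * 8e6 / 10e9;         // of a 10 Gb/s link

    std::cout << "Frame rate:      " << frame_rate_hz / 1e3 << " kHz\n";
    std::cout << "Throughput:      " << throughput_MBps << " MB/s\n";
    std::cout << "10 GbE fraction: " << link_fraction * 100 << " %\n";
}
```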
09:55:23 The VTP by its very nature is a stream aggregation point, right; we have to aggregate streams, and we still need to develop some standard methods for throwing data away, or inhibiting all the streams synchronously at their source. 09:55:37 Currently in the streaming model we have a distributed sync signal from our trigger/clock system that we use to basically start all the streams at the beginning of a run. 09:55:46 So in theory we could inhibit all the streams in some way during a run, but we haven't implemented such a thing. 09:56:03 In the current streaming tests we have guaranteed sufficient bandwidth from the FADCs to the VTP, because we're only sending pulse sums and times, and that's been limited, but we need to expand to allow for more data from the flash ADCs, with 09:56:21 full waveforms, or other payload modules that may generate more data. So we have to make sure we understand how we're going to potentially get rid of, or inhibit, these data streams. 09:56:46 In the future we're looking at other commercial options that we might be able to use as a substitute for the VTP, and we've recently acquired this Arista 7130 switch, which is basically a big FPGA in a box, a pizza box. 09:57:05 It's kind of like a VTP on steroids. It has a Virtex UltraScale+ 09:57:12 9P FPGA, it has 48 SFP ports on the front panel, and those ports can be mapped to up to 60 different application ports directly on the FPGA. There are 32 gigabytes of RAM, and there's a CPU in this box as well, which has JTAG and PCIe access 09:57:37 to the FPGA. 09:57:41 What's also nice about it is that there is already available vendor application support for things like port aggregation and high-resolution timestamps, and there are development kits for full access to the FPGA resources for our custom applications. 09:57:59 All the ports can support standard 10 Gigabit Ethernet or custom serial-link protocols. So the idea is that if we have a lot of custom front-end electronics with serial offload, 09:58:15 this may be an interesting option for aggregation and processing, to send the data to the back-end servers as sort of standardized streams on the network. 09:58:31 Just to summarize: this VXS platform has really provided us a reasonably long-term solution to support the next-generation experiments that are going to need this higher performance at the front end, both for triggered readout as well 09:58:51 as for future streaming support. 09:58:55 We're making this transition in the CODA data acquisition system from the traditional software readout controller: we now have this hybrid, hardware-accelerated application, and that's going to be something that we're going 09:59:11 to rely on for these experiments, and a lot of the work ahead now involves making it really robust against whatever the front-end electronics may try to throw at it. 09:59:27 At JLab, the nature of the varied types of experiments and detectors here at the lab really motivates our small electronics and computing support groups to look for these commercial solutions, as well as standardized software and firmware, to 09:59:43 help manage all of the data acquisition challenges that we have. Thank you. 09:59:50 All right, David, thank you very much for this interesting talk.
So I'm coming to the next raised hand in a minute; let me just comment on one thing. 09:59:59 I mean, okay, so you rediscovered the Nagle algorithm, right, I mean, yeah, it's the TCP windows, right. 10:00:06 But this is something, like, I don't think you made it yourself. 10:00:09 Yeah. 10:00:12 You made it, didn't you? I mean, you have your TCP stack in the fabric, right, it's not running on the ARM core or something, it's running on the FPGA part. It's running on the FPGA, that's correct. 10:00:24 Yeah, I mean, this is pretty much it; I think you found out that they don't actually do Nagle properly. 10:00:32 And I mean, that's why you had too many re-sends. But you had this other comment about maybe looking at UDP and stuff, and so my advice is not to do this. 10:00:42 I mean, you know, generations of engineers have been working on making TCP what it is, right, and on the receiving end we have all of TCP implemented in silicon, right. So if you go to UDP 10:00:58 you will have to redo all that stuff, so my advice is to not go this way; I would just rather fix the TCP. Well, UDP on the FPGA certainly simplifies the FPGA. 10:01:12 Yes, I mean, that's it: if you have a stack that works, I wouldn't look back. Right, well, so one of the questions is making sure we understand how to get the maximum performance out of the stack, though; that's, you know, yeah. 10:01:27 Okay, so let's go to the raised hand. Yeah. So, when you actually look at the TCP/IP stack, I mean, if you want to go to UDP, I would go a step further, right: if you give up the promises TCP/IP makes, and you have full control 10:01:46 over the network anyway, right, it's not routed over the internet, you can just forget layer three, go to layer two and send Ethernet frames, right. Then you could rip out the ARP resolution and all of that from the fabric and make it even smaller, right; that's 10:02:01 what really, I think, would be of benefit: you carry around a lot of stuff you don't need just to send from one MAC address to another MAC address. Right, well, yeah, I agree with you. 10:02:16 One of the other things, though, 10:02:22 and I refer to this from the user-friendly standpoint, right: ideally what you'd like to be able to do is provide the user at the other end something that's a single coherent large block of data, and so, you 10:02:42 know, to what degree we do this, and reassemble things in hardware, or whether we have to reassemble things in software, 10:02:54 I think is a question that we'd have to look at in those types of situations. Yeah, I mean, certainly TCP/IP, this pipe model where you throw data in one side and it comes out in the same order on the other side, guaranteed, 10:03:06 is very nice, right. 10:03:10 That really is helpful from the application perspective. Actually, on that, 10:03:15 on this big-block question: it's not so much related to your talk, but maybe as a thought-provoking statement for everybody, 10:03:24 I'm thinking more and more about whether we actually need events at all, even at a higher level, or whether we have an easier time with a model where we say everything is timestamps. 10:03:37 Right.
10:03:38 I mean the stream model, right: all the tracks have a time associated with them, and maybe we do not build detector-wide events at all. Do we actually need this, or is this just something we are used to? 10:03:57 Right. Because in the end we want something like: how many tracks do we see in this region of the detector in coincidence with a track in that region of the detector. 10:04:08 And I suggest that we have this discussion in your session, I mean, yes, yes. 10:04:13 Fine, but I mean, you heard in my talks that this is kind of what I was doing, right, so okay, let's talk about this, because I think we will actually find ourselves doing this, you know, when 10:04:29 the EIC is turning on. And Jin, a quick question? We're running a little bit late. Okay. A quick question about the Arista switch slide. And so, again, a very nice talk, David. 10:04:43 At least the spec sheets 10:04:50 just give me the feeling that it looks very similar to a PCIe card that you would put in a computer server. 10:05:01 And it reminded me of the biological term called convergent evolution: 10:05:12 people find very similar solutions even coming from different origins. So my question is, since this is a very interesting product line, could you comment on what the roadmap is for this FPGA-based 10:05:25 switch in the coming years, especially the way you see it? 10:05:46 Yep. So, in our conversations with the Arista engineers: Arista actually bought this company, which was called Metamako, and basically absorbed them, 10:05:47 and that's one of the things that they were working on. 10:05:53 The plan is that, of course, here we have all SFP+ 10-gigabit ports coming in, but they're going to expand the offerings; they're going to provide new models that have support for higher- 10:06:16 bandwidth QSFP ports, hundred-gigabit type, with 100 and 400 gigabit capability. So in that sense they're heading in that direction. 10:06:31 But one of the things I thought was interesting is that this 10:06:35 also seems like a potential 10:06:40 way to... I was curious about the capability of high-resolution timestamping. 10:06:50 Now, I don't think they're at the level of what we need at the front end in something like this, but that doesn't mean that it can't get there in some way. 10:07:00 So let's keep this in mind for the afternoon. We actually have to move on; thank you again very much, it was a very interesting talk. 10:07:11 And so we are moving on to Maurizio Ungaro on MC streaming. Okay. 10:07:18 I see you have — thank you for uploading the talk — take it away. 10:07:24 You see my screen? Yes. 10:07:27 Awesome. 10:07:29 Okay, so in this talk I will elaborate on some concepts that I presented before, and point out that some of these concepts are actually implemented in code and some others are in development. 10:07:48 Let me remind you of the scope of the project, which is to have a simulated data stream on the network which is indistinguishable from real data.
10:08:00 In other words, we have a source of data that the data subscriber and the data analyzer can digest as if it were real data. This of course is very important in order to define and address challenges on the hardware, communication and software issues related to 10:08:25 all the streaming protocols and the analysis system. 10:08:31 Okay so, since streaming is 10:08:37 a function of time, the time representation of the signal is crucial, of course, and let me start by showing how the signals are accumulated and the hardware electronics emulated. 10:08:55 First of all, let me show the definition of the hit, which is aimed at collecting the GEANT4 steps into hits as they are seen by the readout electronics. 10:09:10 This involves the definition of the time window of the electronics, which is different for each detector. Here is an example of this concept. For example, we have two tracks: 10:09:24 one produces the two circled 10:09:33 GEANT4 steps on cell two, and one track, the dotted line, produces a secondary track, which also produces one circle and one triangle. Now, the black circles are all within the readout electronics time window, so they are all collected in one 10:09:51 hit. 10:09:55 But the triangle step comes much later, so it falls in the next time window of the readout electronics. So these GEANT4 steps are associated with two different hits, as they would be in reality. 10:10:15 In passing, since we're talking about hit definitions, 10:10:19 I want to add the energy-sharing mechanism. This is relevant for detectors such as a silicon vertex tracker, or even when we have, for example, a scintillator with two PMTs at 10:10:36 the ends of the scintillator. The mechanism 10:10:41 dynamically creates GEANT4 steps based on the original GEANT4 step properties and, if necessary, scales the energy and the positions. In other words, this emulates the signal in two PMTs given 10:11:01 one single GEANT4 step, or in two silicon strips from one single GEANT4 step. 10:11:20 Now that we have the correct time representation in the readout electronics, I want to show you one example of the voltage-versus-time emulation in the electronics, and the specific example I'm showing is the flash ADC 10:11:30 emulation. 10:11:33 This is achieved by the software using user-defined functions of time that provide a signal which depends on the energy deposited in each step. 10:11:47 On the left is such an example: 10:11:49 this is a track 10:11:55 going through a scintillator. 10:12:05 The various dots, color-coded by particle, represent the energy deposited at different times; so each of the dots is a GEANT4 step. 10:12:05 These steps are convoluted with the user-defined function; 10:12:12 all these functions are then added up, and this provides the voltage-versus-time function in the readout electronics for that single GEANT4 identifier. 10:12:26 This looks like a big overhead in code but actually, surprisingly, it's not; it's actually negligible compared to things like 10:12:37 the track swimming in the magnetic field, GEANT4 itself, etc. 10:12:43 The next step is sampling this function, for example every 4 nanoseconds, as is the case for the CLAS12 flash ADCs. This provides a single ADC value as a function of time, every 4 nanoseconds. 10:13:16 On the left is a typical 10:13:17 flash ADC signal; this one was for the calorimeter.
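A sketch of the voltage-versus-time emulation just described: each GEANT4 step contributes a user-defined pulse shape scaled by its deposited energy, the sum is sampled every 4 ns, and a pedestal is added. The shape, gain and pedestal values below are placeholders, not the GEMC defaults.

```cpp
// Illustrative flash ADC emulation: sum of per-step pulse shapes sampled at 4 ns.
#include <cmath>
#include <iostream>
#include <vector>

struct Step { double time_ns; double edep_mev; };

// Hypothetical pulse shape (user-defined in the real implementation).
double pulse_shape(double t_ns) {
    if (t_ns < 0) return 0.0;
    const double tau = 10.0;                        // ns, assumed decay constant
    return (t_ns / tau) * std::exp(1.0 - t_ns / tau);
}

std::vector<int> emulate_fadc(const std::vector<Step>& steps,
                              double window_ns, double dt_ns = 4.0,
                              double gain = 200.0, int pedestal = 100) {
    std::vector<int> samples;
    for (double t = 0; t < window_ns; t += dt_ns) {  // sample every 4 ns
        double v = 0.0;
        for (const auto& s : steps)                  // sum the shapes of all steps
            v += gain * s.edep_mev * pulse_shape(t - s.time_ns);
        samples.push_back(pedestal + static_cast<int>(v));
    }
    return samples;
}

int main() {
    std::vector<Step> steps = {{12.0, 1.5}, {18.0, 0.7}};   // two GEANT4 steps
    for (int adc : emulate_fadc(steps, 120.0)) std::cout << adc << " ";
    std::cout << "\n";
}
```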
10:13:35 The pulse shape depends on the user-defined function, and the pedestal is also 10:13:43 emulated using the mean and sigma extracted from real data, over the part of the window where no signal is present — so this is the baseline pedestal of the electronics. 10:14:04 This was actually a success story, because we used this flash ADC emulation to feed back into the FPGA simulation, and this way we found an inefficiency 10:14:17 that was present in the data and was not understood; with the simulation it was found and it was fixed. So this was a success story for this emulation. 10:14:32 Now, we have to collect these signals in buffers ready to stream, and this is a kind of paradigm shift from normal Geant4 simulations, which are event-centric. 10:14:52 For this, let me introduce a class object. 10:14:56 This is the streaming readout unit: it represents a single hardware unit that streams data on the network. This could be a single flash ADC board, a drift chamber readout board — anything that will be read out. The buffer contains the signals from all channels, 10:15:21 possibly time ordered; it can contain waveform packets, raw data, or integrated values, for example mode-7 flash ADC data. And of course in the simulation we need to include physics and electronic noise, either produced by Geant4 or merged from raw data. 10:15:43 The last step is that we need to connect the Geant4 identifiers to the streaming readout unit electronics address, which is typically, for example, crate, slot, and channel, and this is done with a library that reads the real translation table databases and 10:16:05 does exactly that. So, for example, the Geant4 identifier is typically a string identifier, which currently resides in the hit type in Geant4. 10:16:18 And, for example, in the case of the drift chambers we have sector, region, layer, and wire, and this needs to be connected to the crate, slot, and channel of that particular wire. 10:16:32 This is a library that we use in CLAS12, and this is an example of the XML representation of the CLAS12 simulated data hardware addresses. 10:16:47 In other words, we take the Geant4 sensitive-detector identifier and convert it to crate, slot, channel — and this is what was used to feed back into the FPGA simulation, successfully. 10:17:07 So, this illustrates the scope in a little more detail and represents the plan for the simulation: the simulation will provide one file with a buffer of data per streaming readout unit. 10:17:27 In reality, of course, we will have several streaming readout units, so the simulation will provide a collection of these files. 10:17:34 As a reminder, this can be a file representing a flash ADC board, a drift chamber readout board, any hardware 10:17:44 representation in the class object, and this is to be fed to the data subscriber and analyzer. 10:17:53 Now, each streaming readout unit contains a buffer of data. 10:18:01 And this is the paradigm shift I was talking about earlier: this is not a collection of Geant4 events, but data as a function of absolute time. 10:18:11 So in order to address this, 10:18:14 there are a couple of things we need to do. 10:18:16 Number one: the event time definition in Geant4. 10:18:23 A typical Geant4 event is a blob where we can have several tracks, each with hits at different times within that event. 10:18:39 Now, these events are treated independently, and in Geant4, in fact, in multithreaded mode they are sent to different threads. 10:18:50 The events kind of don't know about each other —
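[Editor's note: a minimal sketch of the translation-table idea just described — a Geant4 sensitive-detector identifier mapped to the hardware address (crate, slot, channel) that the streaming readout unit is keyed on. The key layout (sector, layer, component) and the entries are hypothetical; the real library reads this mapping from XML/database translation tables.]

```cpp
// Sketch: Geant4 identifier -> (crate, slot, channel) lookup.
#include <cstdio>
#include <map>
#include <tuple>

struct HardwareAddress { int crate, slot, channel; };

using G4Identifier = std::tuple<int, int, int>;   // (sector, layer, component) - assumed layout

int main() {
    // Toy translation table; in reality this comes from the detector's XML/DB tables.
    std::map<G4Identifier, HardwareAddress> translation = {
        {{1, 1, 1}, {3, 4, 0}},
        {{1, 1, 2}, {3, 4, 1}},
        {{2, 5, 7}, {3, 5, 12}},
    };

    G4Identifier hitId{2, 5, 7};                  // identifier attached to a simulated hit
    auto it = translation.find(hitId);
    if (it != translation.end()) {
        const auto& hw = it->second;
        std::printf("sector %d layer %d component %d -> crate %d slot %d channel %d\n",
                    std::get<0>(hitId), std::get<1>(hitId), std::get<2>(hitId),
                    hw.crate, hw.slot, hw.channel);
    }
    return 0;
}
```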
they don't know about each other, and this is what we want to address. The first step is that we introduce the event time window. 10:19:00 And within this event time window we have the detector electronics time windows that I described before. 10:19:18 In CLAS12 we typically use an event time window of 250 nanoseconds, because this is our typical drift chamber time window and it also encompasses the other detectors' time windows. 10:19:27 We also use this time window to simulate the physics background. 10:19:34 In that case, for 10^35 luminosity, which is a typical CLAS12 luminosity, the beam is about 130,000 electrons bunched in 4-nanosecond buckets, so this is about 62–63 buckets per event. 10:19:54 These, of course, produce hits 10:19:58 that will end up in the detector, in addition to the physics event. So what does this buy us? Well, now we have a one-to-one mapping between absolute stream time and event number. 10:20:18 In other words, if the event time window is 250 nanoseconds, event number 5 will start at 1.25 microseconds, 10:20:26 and event number 10 will start at 2.5 microseconds. 10:20:31 So now we have a link between streaming time and what happens within a single event. 10:20:43 In this particular example we have physics hits at absolute times that happen within a single event, and each single event is also associated with 10:20:55 its beam bunches. 10:20:59 But we still have one problem: 10:21:02 we're still limited by the Geant4 event-centric model, in that 10:21:06 the hits from one event could spill out and belong to the same buffer — the same streaming readout unit buffer — as the subsequent event. 10:21:20 In other words, we can have a situation like this: we have a Geant4 event time window with all the tracks and the secondaries, 10:21:32 and the time evolution — which, by the way, includes things like cable delays, the time evolution of the signal, or any additional time delay introduced by the digitization — 10:21:47 all of this can extend past the Geant4 event time window. 10:21:55 But we do need to collect these hits in the streaming readout unit buffer. 10:22:00 Also, the times of all these additional steps can end up in the next Geant4 event, and so we need to collect all of those as well. 10:22:15 And in addition, we have the fact that Geant4 sends each event to a different thread. 10:22:25 So this plot is supposed to be shifted a bit to the left — I will make a better plot next time — but as a function of time, 10:22:33 Geant4 10:22:35 will send one event, we do the digitization, then it is discarded and the next event is sent, 10:22:43 and the next event. But now, with the concept of event time 10:22:48 and with the time evolution of the hits within each event, we can make this collection: 10:22:55 we can keep the streaming readout unit object in memory, accumulate the signals in each relevant channel, and keep it for the necessary time before writing it to disk. 10:23:17 So, as a function of time, we will have a situation like this, where we have to make sure not to leave behind any hits: the red line is the line before which 10:23:32 no more hits can arrive from the Geant4 events, so everything behind it has already been written to disk. 10:23:47 And the present buffer accumulates events until it is time to write it to disk.
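[Editor's note: a minimal sketch of the one-to-one mapping between Geant4 event number and absolute stream time mentioned above. The 250 ns window and 4 ns bunch spacing are the example numbers from the talk; the loop and output are illustrative only.]

```cpp
// Sketch: event number <-> absolute stream time, plus beam buckets per event.
#include <cstdio>

int main() {
    const double window_ns = 250.0;   // event time window (example value from the talk)
    const double bucket_ns = 4.0;     // beam bunch spacing

    for (int event = 0; event <= 10; event += 5) {
        double start_ns = event * window_ns;                          // absolute start time
        int buckets     = static_cast<int>(window_ns / bucket_ns);    // ~62 buckets per event
        std::printf("event %2d starts at %.2f us, contains %d beam buckets\n",
                    event, start_ns / 1000.0, buckets);
    }
    return 0;
}
```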
10:23:53 And of course the next buffer will also accumulate events, so we have, in other words, a train of buffers 10:24:01 that are dynamically created and, when they are not needed anymore, 10:24:09 are written to disk. 10:24:13 In my last slide here I 10:24:17 want to summarize what I just described. 10:24:23 We have 10:24:26 a mechanism to accumulate multiple events — for example the first eight events here, which are the boxes on top — into streaming readout unit buffer number one, which also does what happens in reality, for example 10:24:46 the time ordering of the hits, etc. 10:24:50 But the last two of these first eight will also go into streaming readout buffer number two. 10:24:58 Now, as an example of this: 10:25:04 the time size of the buffer for the CLAS12 test that was done last spring was 65 microseconds. 10:25:12 This, for the VTP, represents about 70 hits, I think, for each of our boards, so it is very manageable. 10:25:24 So, in other words, we now have hits from Geant4 events that can end up in different buffers, depending on propagation time. 10:25:37 And also, pileup is intrinsically there, because the time evolution of the hits is taken into account by the readout electronics emulation. 10:25:52 Um. 10:26:07 So, this is my last slide, as a summary and outlook. A lot of the concepts that I described already exist in code used for CLAS12 and SoLID. The modules are kind of there, dedicated to that framework, but they are now being rewritten 10:26:16 so that they can be used in any C++ simulation engine; in particular, the streaming readout unit is a work in progress and I think it will be usable in any C++ simulation. 10:26:34 The next step for us would be to 10:26:38 use a simple existing detector geometry — for example what was done last spring — and emulate that test. 10:26:46 That was a single VTP, but I think it is a good proof of concept that this is possible. 10:26:55 After we have this, we can add multiple streaming units, for example multiple crates and simultaneous buffer streams, and this will serve as 10:27:12 a testing ground for simulating the challenges of large-scale detectors: 10:27:17 things like latency, network issues — what happens if 10:27:22 some board dies, large amounts of data, malfunctions, or timing with respect to signal shapes. 10:27:31 I didn't talk about this slide because no work was done in that respect — event generators in streaming readout, in other words going to generators that 10:27:43 are time-based and not event-based. 10:27:52 Thank you, that was my last slide. 10:27:56 All right. Thank you. 10:27:57 Very interesting — so we can simulate this one. 10:28:12 That's good. So Jan, I see you have your hand up. — I just forgot to lower it, sorry about that. — Okay, and so David. 10:28:17 David Lawrence, go ahead. 10:28:19 Mauri, that's a nice talk. I'm just kind of curious — maybe this is a lot of work — but have you done anything to compare running the simulation this way with, say, the nominal event-based simulation that you do with CLAS12, and see that 10:28:35 you get kind of the same answers out? 10:28:39 So, this is part of the next step. 10:28:42 The part about 10:28:47 collecting buffers of data that, you know, span multiple events is not done yet, so that would be the next step, and then of course we need to compare with what was done. 10:29:05 Okay, thanks.
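[Editor's note: a minimal sketch of the "train of buffers" idea summarized above — hits carry an absolute time, buffers cover fixed time slices, and a buffer is only flushed to disk once no earlier hit can still arrive (the "red line" watermark). Class names and numbers are illustrative, not the actual framework code.]

```cpp
// Sketch: dynamically created time-slice buffers, flushed behind a watermark.
#include <cstdio>
#include <map>
#include <vector>

struct Hit { double time_ns; int channel; double adc; };

class BufferTrain {
public:
    explicit BufferTrain(double slice_ns) : slice_ns_(slice_ns) {}

    void add(const Hit& h) {
        long slice = static_cast<long>(h.time_ns / slice_ns_);   // which buffer this hit belongs to
        buffers_[slice].push_back(h);
    }

    // Flush every buffer that ends before the watermark: no future Geant4 event
    // can still contribute hits earlier than this time.
    void flushBefore(double watermark_ns) {
        for (auto it = buffers_.begin(); it != buffers_.end();) {
            double end_ns = (it->first + 1) * slice_ns_;
            if (end_ns <= watermark_ns) {
                std::printf("writing buffer [%ld] with %zu hits to disk\n",
                            it->first, it->second.size());
                it = buffers_.erase(it);
            } else {
                ++it;
            }
        }
    }

private:
    double slice_ns_;
    std::map<long, std::vector<Hit>> buffers_;   // ordered by time slice
};

int main() {
    BufferTrain train(65000.0);                  // 65 us slices, as in the CLAS12 test
    train.add({100.0, 3, 512});                  // hit from an early event
    train.add({64990.0, 7, 230});                // late hit near the slice edge
    train.add({70000.0, 1, 800});                // hit landing in the next buffer
    train.flushBefore(65000.0);                  // only the first slice is safe to write
    return 0;
}
```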
10:29:11 Any other questions? I don't see any further hands. 10:29:11 Okay, if not, thank you again very, very much. And we are almost back on time, which is good. 10:29:20 So, the next talk is by Ethan Cline from Stony Brook. 10:29:26 We are going to a different place, in Europe now, with the current status and outlook for TPEX. So, Ethan, please share your screen and take it away. 10:29:37 Yes, I am here. Can you hear me, do you see my screen and all that? — Yes, on both. — Excellent, excellent. Okay. So, yes, a little bit of globetrotting. 10:29:51 My name is Ethan Cline, I'm a postdoc at Stony Brook, and I'm here today to talk about the current status and future outlook of TPEX, specifically in relation to streaming readout. 10:30:08 To get started, I am going to give a brief introduction to TPEX and what it aims to measure and the questions it hopes to answer. 10:30:21 TPEX stands for the Two-Photon Exchange eXperiment. I'm sure many of you in the audience are familiar with the proton form factor discrepancy: I'm showing here the difference between the Rosenbluth measurements and the polarization transfer measurements 10:30:41 of mu_p G_E over G_M, and this is a well-known discrepancy. 10:30:47 The ratio is approximately one when measured with the Rosenbluth technique, and it decreases linearly when you use the polarization transfer technique. 10:30:59 The leading explanation for what causes this discrepancy is the two-photon exchange effect. That's been measured by three recent experiments — VEPP-3, CLAS at JLab, and most recently OLYMPUS at DESY — and those experiments measured 10:31:21 the two-photon exchange and found it to be small; in the region that they measured, it exists but it's small. 10:31:28 But if you look at the figure where I'm showing these arrows, they cover a limited range of Q squared, 10:31:36 and they cover a range of Q squared where the two-photon exchange effect would be small if it existed. So the logical thing to do is to extend the kinematic range and attempt to measure the two-photon exchange effect at larger Q squared, somewhere 10:31:51 between 2 and 6 (GeV/c) squared. And that's what TPEX aims to do: measure the two-photon exchange over a larger kinematic range. 10:32:05 And it aims to do that at DESY, so it's often called the spiritual successor to OLYMPUS. There are several people from the OLYMPUS experiment on TPEX helping guide the design and get the experiment up and running. 10:32:20 I'm showing here an aerial view of DESY, and you can see the large PETRA III ring and then the smaller DESY II ring, where TPEX has had a few test beams in the past, and I'll be talking about the results of those test beams and what was 10:32:39 measured. 10:32:41 Okay. 10:32:43 This is a zoomed-in view of DESY II and the positron and electron beams that they have in their ring. 10:32:55 They have their various test beam areas — T21, T22, T24, and T24/1 — the different extracted beam lines where you can get these positrons and electrons out. 10:33:07 And of course, if you're doing a two-photon exchange measurement, you're comparing the ratio of electron-proton and positron-proton scattering, so you need both particle species in your beam in order to do this measurement. 10:33:21 A few years ago, actually before I joined TPEX, in 2019 or '18 — I don't remember exactly when, time flies in lockdown —
10:33:35 there was a beam time, and TPEX did some test measurements in this T24/1 experimental area, a little parasitically behind the T24 area. 10:33:57 And there was detailed work on comparing and contrasting the efficacy of traditional triggered readout versus streaming readout for the experiment. 10:34:04 So in this T24/1 hall 10:34:07 there was a calorimeter array that we built, and I'm showing a top view of it on the right side of the slide in the figure. 10:34:18 It's nine 2 x 2 x 20 centimeter lead tungstate crystals, and you can see three of them on the top — this is actually just a 3x3 array. 10:34:30 This is part of the setup. 10:34:32 There are the HV and readout cables and such on either side, you can see these copper pipes coming out for cooling, 10:34:42 and then the edge of the calorimeter all the way there on the right. 10:34:46 They were wrapped with white Tyvek and aluminum foil to make sure they are light-tight, and there are Hamamatsu PMTs powered by LeCroy 1461 HV modules. 10:35:02 So, that's the physical setup. 10:35:06 And here I'm showing you two figures of this calorimeter assembly in the beam line. 10:35:13 On the left, again, this is the assembly uncovered, but we have this red crosshair here from the laser alignment system. This was put in place in order to make sure we know the relative positions of the calorimeter blocks to each other, which 10:35:30 calorimeter block is actually positioned in the beam in the test hall, and all those things necessary to do the data analysis at a later stage. 10:35:41 On the right, I'm showing you a side view of the calorimeter assembly here. 10:35:46 You can see again this copper cooling attached to the cooling assembly, and then most importantly — the takeaway from this figure — these trigger detectors. 10:36:01 As I said, we're interested in comparing and contrasting how well a triggered readout performs versus a streaming readout for our particular setup. 10:36:10 That means you actually need some type of common trigger between the two systems. For us, what we used was two pairs of trigger scintillators. On the left there are two trigger scintillators with some minimal overlap between them, one slightly 10:36:28 upstream of the other. And then on the right we have yet another pair of trigger scintillators, with only a little bit of overlap. On the far left of the picture, near this yellow block, is the beam entrance into the experimental hall. 10:36:44 So you have your beam come down, and hopefully the electrons will make it through these trigger scintillators into the calorimeter, and the overlap between the trigger scintillators makes a very small square 10:36:58 when projected onto the front face of the calorimeter assembly, so hopefully we're only triggering on a single bar at a time in our assembly. 10:37:08 So that's more of the experimental setup. Now, just to briefly discuss the DAQ electronics and detector mapping that was used in this particular test beam: 10:37:20 we had both a triggered and a streaming DAQ. For the triggered DAQ, we had a CAEN V792 QDC, which, you know, as a QDC measures energy, but it also records the CPU time when the information was sent to the readout computer.
10:37:41 We also had a streaming DAQ, which was based on a CAEN V1725 digitizer, which recorded not only energy but also waveform information and more precise timing information, as well as the common CPU time between the triggered and streaming DAQs when 10:38:01 the information was recorded in the computer. 10:38:05 Those of you familiar with this particular brand of QDC will know it has several channels; we only used nine of them for our readout because that's how many crystals we had. The digitizer unfortunately only had eight channels, so we couldn't read 10:38:20 out all nine crystals — that's why we have this dashed line in the top right. And because we want to compare and contrast triggered and streaming readout, 10:38:29 we actually want to feed the trigger signal into our digitizer, so we have that information in the data analysis done later. So this bottom-left crystal was also not read out, and instead in that channel was the trigger. 10:38:46 The center crystal is number four, highlighted in red. Most of the figures that we show compare crystal four and channel four, just because it's the central crystal, so you can contain the energy shower in the 10:39:02 adjacent crystals and do energy summation and things like that, as you typically do with calorimeter arrays like this. 10:39:12 When the data was taken, the gain was matched at 5.2 GeV beam energy. Then, after the gain was matched, we took more data and more spectra at 2, 3, 4, and 5 GeV, and I'm showing you on the left and right here 10:39:30 example spectra as read out by the QDC, and as read out by the digitizer on the right. 10:39:36 Both are reading out crystal four, and for the digitizer you have to manually put in this coincidence cut on channel zero, which, again as a reminder, is the trigger. 10:39:49 So you see that, overall, the main peak shapes are generally the same, although the relative peak heights are different, and the digitizer has more of a tail leading down to lower energy — and I'll get into more of that in just a moment — but this is 10:40:05 some of the results from the data analysis and from the data we collected during this test beam several years ago. 10:40:15 When we want to talk about the advantage of streaming readout — and I think most of us here are fairly convinced that there is an advantage to streaming readout — 10:40:23 we have these limitations from our QDC: there's dead time, you have your trigger signal size, and the fact that your electron has to hit the trigger detectors in a given time window. All this causes the QDC to see fewer events overall than 10:40:42 the digitizer, and it makes sense that this is true, given all the restrictions that exist in triggered QDC readout that aren't present with our digitizer. 10:40:55 So we wanted to select all events produced by the electron beam that might be seen by the digitizer but that aren't seen by the QDC.
So we use this timestamp information that I already mentioned is available, and in offline analysis we can select coincidence events between 10:41:11 our trigger signal and the other crystals within the digitizer. We can use these selected events to determine the coincidence time offsets between these crystals and the trigger, and we can apply these time offsets to the original 10:41:28 data to see how many events we can actually get out of our digitizer. As an example, I'm just going to show you a selected data set for a 2 GeV electron beam on crystal four. 10:41:42 There we go. 10:41:45 Alright, so here is the energy spectrum selected at one particular energy, 2 GeV, for this channel four, and you can see there are about 20,000 entries in this particular histogram. 10:41:57 These are events in channel four which had at least six more events within the time offset range that we specified, in either 10:42:07 the other crystals or the trigger signal. In the same time, the QDC, which can only read out when it has a trigger, recorded about 9,000 triggers. So the digitizer, with cuts on timing offsets in the other crystals in addition to the trigger signal, 10:42:24 was able to see about a factor of two more events than could be seen in the QDC under similar conditions, simply because the digitizer has access to more information than the traditional triggered readout does in the QDC — 10:42:38 so that was a really nice thing to see, and a clear demonstration of the improvement of streaming readout over triggered readout in this particular configuration. 10:42:49 Now we want to do a more detailed comparison by identifying QDC events in the digitizer data, so we can directly analyze both of these and compare, as I'll show later, to some simulation. 10:43:04 We start off with our trigger-and-channel-four coincidence pre-selected data. It turns out that the number of recorded triggers in the digitizer that were actually seen in channel zero is about 4% smaller than in the QDC. 10:43:22 And this is because there were a few digitizer events that had this "conversion not finished" error, so we had to throw those out — this is just a readout problem, and we couldn't use those events if the full digitization isn't recorded or completed 10:43:36 in time. 10:43:38 This makes a direct comparison of our QDC and digitizer data fairly difficult, 10:43:45 as we have to throw out these events. So how do we align the events between the QDC and the digitizer? The idea was to calculate the time intervals between event i and event i+1 in the QDC and in the digitizer, compare those, and use 10:44:04 that to align the events. And I'm showing you here 10:44:11 exactly that — the time of index i minus index i+1 in the QDC and in the digitizer — and you can see this is the uncorrected time, just the raw output that you get from these two different readouts. 10:44:22 Excuse me — the light green points are the digitizer time differences, the blue points are the QDC, and you can see there's no overlap between these data points: they're seeing different time differences. 10:44:38 So then we apply the actual correction to the time differences, 10:44:44 aligning them to make sure that we're looking at the appropriate events.
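[Editor's note: a minimal sketch of the offline alignment just described — both readouts record a per-event CPU timestamp, so the interval between consecutive events, dt_i = t_{i+1} - t_i, can be compared between QDC and digitizer to line up the two event lists. The timestamps here are toy numbers (with two junk events at the start of the QDC list), not the real test-beam data, and the matching criterion is illustrative.]

```cpp
// Sketch: align QDC and digitizer event lists via consecutive time differences.
#include <cmath>
#include <cstdio>
#include <vector>

std::vector<double> intervals(const std::vector<double>& t) {
    std::vector<double> dt;
    for (std::size_t i = 0; i + 1 < t.size(); ++i) dt.push_back(t[i + 1] - t[i]);
    return dt;
}

int main() {
    // Toy CPU timestamps (seconds); the QDC list starts with two junk events.
    std::vector<double> qdc  = {0.01, 0.02, 0.10, 0.32, 0.55, 0.81, 1.02};
    std::vector<double> digi = {0.11, 0.33, 0.56, 0.82, 1.03};

    auto dq = intervals(qdc), dd = intervals(digi);

    // Find the shift of the digitizer list that best matches the QDC intervals.
    std::size_t bestShift = 0;
    double bestScore = 1e9;
    for (std::size_t shift = 0; shift + dd.size() <= dq.size(); ++shift) {
        double score = 0.0;
        for (std::size_t i = 0; i < dd.size(); ++i)
            score += std::fabs(dd[i] - dq[shift + i]);   // compare dt sequences
        if (score < bestScore) { bestScore = score; bestShift = shift; }
    }
    std::printf("best alignment: digitizer event 0 matches QDC event %zu "
                "(total |dt| mismatch %.3f s)\n", bestShift, bestScore);
    return 0;
}
```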
You can see in this next figure, once we've applied this correction, excellent agreement between the times of event i and event i+1, and nice 10:45:00 agreement between the QDC and the digitizer. And I guess I should mention, just for clarity, that what you're seeing on the left here is the full index of all the events in the run and then all of the time differences of index i minus i+1, 10:45:16 and the right panel is just a vertically zoomed-in snippet of this left panel, to make it a little bit clearer what you're actually seeing. 10:45:27 Okay, so we applied this time correction and we see that the time between subsequent events in the two readouts can be aligned, and we see excellent agreement there. 10:45:38 The next step is to actually look at the energies recorded by the two different readout systems. 10:45:47 And when we look — even after aligning these times — at the charge recorded by the QDC versus the charge recorded by the digitizer, we see that things are still uncorrelated: we're just seeing a blob, not a nice linear correlation 10:46:02 between the two. 10:46:03 So the question is, what's going on? Why are we just seeing a blob instead of a linear correlation? 10:46:10 Well, it turns out after some detailed analysis that the QDC was not being cleared at the beginning of each run, so you were seeing a few junk events at the very beginning, which really ruin the correlation. 10:46:25 When you throw out those first 30 to 40 QDC events — of course this is a run-by-run problem, so how many you throw out varies with the run — and shift all the events up in the readout so that they properly 10:46:41 align with the digitizer, you see this nice, knife-sharp correlation in energy between the QDC and the digitizer. 10:46:51 So this is a really nice result, and it shows that we can actually begin to compare these two readouts to our simulation and see how well they agree. 10:47:03 A Geant4 simulation was written which takes the electron beam that we know we're getting from DESY and propagates that beam through the T24 experimental hall into the T24/1 hall where TPEX was taking test data, and 10:47:24 you do all of your standard simulation techniques — you scale the peak, you shift axes and such — to overlay this simulation with the data. This is all still very preliminary analysis, 10:47:41 but the Geant4 simulation has quite nice agreement with the QDC data, and also between the Geant4 simulation and the digitizer data. 10:47:47 So we're fairly happy with this as a first pass of a comparison between simulation and data, and we're happy that we've seen a nice correlation between the QDC events and the digitizer events, both in the energy readout and in the timing readout that 10:48:02 we were able to determine. 10:48:20 The overall outlook from the analysis that has been done is that we had these triggered and streaming readout schemes that were used in this test beam for a 3x3 calorimeter, and with the offline analysis we demonstrated that 10:48:23 streaming readout has significant advantages compared to the traditional triggered readout that you might use in an experiment such as this with standard QDCs. When we were correlating the QDC and digitizer data,
10:48:36 we showed nice agreement in the energy deposition and relatively good agreement between this measured data and our simulations. In the next beam time we're going to test a 5x5 calorimeter array at the DESY test beam facility. 10:48:52 And in addition to testing this larger calorimeter, we're also going to be working with different streaming readout electronics, seeing how well they compare with each other and contrasting those results. 10:49:06 Just as an example of things we might test, there's the DRS4 evaluation board from PSI, and I'm showing a figure of that here — this has been discussed in other workshops — it has four channel inputs as well as external trigger in and out, and a clock input 10:49:23 and output. With these four channels you can simultaneously sample at 0.7 to 5 gigasamples per second with 1024 sampling points each, with a readout rate of about 500 events per second, and the whole thing is run by a Xilinx Spartan-3 FPGA. So 10:49:41 this is one of the possible readout cards that we might test at our future test beam, whenever that happens — coronavirus considerations, of course. 10:49:55 And then there's also the INFN WaveBoard shown here; this has been discussed at other streaming readout workshops as well — I include the link to the presentation I borrowed this picture from. 10:50:07 This is all powered by the Zynq mezzanine card, kind of in the center of this figure here. 10:50:14 There are a lot of advantages to the INFN WaveBoard, with selectable gain, on-board clock generation and distribution, and of course you can take in external clocks and reference signals. 10:50:25 This particular board has 12 channels, compared to the four channels of the DRS board from PSI, with 14-bit resolution at 250 megahertz, and it has a wide variety of inputs and outputs for high- and low-speed communication — USB, SATA, SFP, GbE — all of 10:50:44 these are available on this INFN WaveBoard. 10:50:49 So that kind of brings me to my summary and the status of TPEX and the coming beam times. We hope, once the funding is finalized and our access to the experimental hall is more formalized, we will be able to measure at 2 and 3 GeV beam energies, 10:51:09 and then maybe someday we could extend that reach to 4, but that's a separate topic. 10:51:16 Currently we're just aiming for 2 and 3 GeV, so we can really shed some light on this proton form factor discrepancy and have a high-quality measurement of the two-photon exchange. 10:51:29 With that I wrap up, say thank you, and ask if there are any questions. 10:51:38 Thank you very much, this was a very interesting talk. I always like this aerial picture of DESY — many, many years ago I was a student there for some time; 10:51:44 they were actually still hunting for the top quark and such, so it takes me back, it's very nice. 10:51:50 I've got a question — I don't see any immediate hands here — but I've got a question about this 10:51:56 tail that you showed in the comparison between the two readout methods. You have this one here, right, where we have the tail. 10:52:06 So before, you had the coincidence — I mean on the left side, right — you make a coincidence between these two 10:52:16 trigger scintillators, basically, to make the trigger, right, and then on the right you just record whatever it is, right?
Yeah, I was just wondering how you did this — did you put them on one channel, delay them, and sum them together, or how did you actually do this 10:52:32 in order to recognize a trigger? 10:52:34 I'm sorry, in what — in the middle, right, on the right side. I mean, when you put the trigger scintillators on one of the channels of the digitizer — 10:52:45 Yes, right. — did you delay them so that you have both signals, 10:52:50 basically, on the waveform, or how did you do this? — The actual delay — that involved figuring out the time offsets that I described a little bit later, to make sure that there actually was coincidence between channel zero and 10:53:07 crystal four. So when we were figuring out these time offsets in order to determine the coincidence between the trigger signal and the crystals — 10:53:16 I think I might ask, if he's still here in the audience, for maybe a little more in-depth explanation of exactly what was done there. 10:53:26 — I can, I can. Okay, so on the hardware side, we built a standard trigger — essentially NIM logic — 10:53:37 and that goes into the QDC as a gate, so all the crystal signals are actually delayed with long cables. 10:53:45 And that signal also goes to the digitizer. 10:53:50 And I think we had the delay cables before the split — 10:53:54 I'm not sure, I think we actually had the cables after the split. 10:53:58 Yeah, okay. Now, what I'm getting at — okay, so you need to trigger the V1725, right, just to make triggers. 10:54:07 But then I would just make an analog sum of the two scintillators and put the waveform on my channel zero that you showed, right — they're a little bit spaced apart in time, right, so then you can do all your software — 10:54:39 No, so we have a busy gate in the trigger, and I wanted to have, really, whether the other one triggered or not. So that is essentially the digital signal on that channel. — I see, okay. And then the time offset — do you have to figure 10:54:42 that out just once, as a static, cable-like offset? 10:54:47 Let me also comment — I mean, Stefan Ritt, who makes the DRS4 chip to begin with and also makes this evaluation board — he is about done with making this, what do they call it, the WaveDREAM or something like this. 10:55:03 Basically this has 16 channels or even more. It's sort of the thing that's not read out through USB anymore but through Ethernet, 10:55:14 so it's sort of the next version of this board. And probably, with the pandemic going on, you might actually be in the time range where you can actually buy one of those. 10:55:24 Okay — and the new one has at least 16 channels or something like this, and much faster than the 500 hertz we got. The last time I talked to him he essentially said, well, you should talk to — 10:55:35 they are trying to outsource this to a company. — I see, okay. — And then we should talk to them. But there's, I mean, there's also the CAEN board based on the — 10:55:45 Right, yeah, the V1724. Yeah. Yes. 10:55:50 Yeah, the problem with these — yeah. Okay.
10:55:56 I mean, some of the problems we had with that readout are the documentation, okay, and there's just some — I mean, the digitizer, right, if I read it out — so, one problem is that I actually wanted to just save whatever comes out and 10:56:13 then decode it later. Yeah. 10:56:15 That doesn't work so easily. — Oh, actually that is exactly how my plugin in our setup works. This is exactly what I do: I do not do any processing, I just get the waveform and dump it. 10:56:27 Okay, so I think we should actually talk about this, because — 10:56:32 do you use the library to decode the waveform first, right? — No, I just get the — there's some API, basically, to just get the raw waveform and dump it. 10:56:44 Yeah, but exactly, that's what I'm doing — I get the raw waveform, so I ask the API to give me the raw waveform contents. 10:56:52 But actually I wanted to just save what actually comes over the VME. 10:56:56 Yeah, but that is what the raw waveform is — I mean, it's not processed; there are two different APIs, right. So you basically get the thing that, like in the manual, has sort of a very complex structure that would have 10:57:10 to be decoded yourself later. 10:57:11 Okay, I think we are boring people here, so let's — we can talk about this offline. And just a final point from me: I give the DRS board and the INFN WaveBoard just as examples — 10:57:27 I'm not saying we're locked into only testing those two. — Yeah. Okay, so Douglas has his hand up. 10:57:35 Yeah, I just wanted to clarify two things. One, the 3x3 test was done in September of 2019, and we're hoping to do the 5x5 test 10:57:50 this fall. The DESY test beam is operational with COVID restrictions in place — masks, two people at a time. 10:58:00 But we're hoping to do the 5x5 in the fall, and we can test three readout systems in parallel: 10:58:12 you know, triggered, this DRS board or the WaveBoard, and the CAEN digitizers — we could do them all in parallel. 10:58:24 Just in case anyone's interested in helping out, coming to DESY. 10:58:29 — I could give you a data acquisition that just works — just kidding. Okay. 10:58:34 Okay, good. Seeing no more questions — Ethan, thank you very much, 10:58:42 that was very interesting. — Thank you for having me. — We are coming almost to the end. 10:58:49 Toward the end — I mean, we have 28 people here, and for some of us it's getting late. 10:58:56 What I would like to do — as we did last time — is that at the end of the next talk everybody switches on the camera and we make a quick screenshot, so that we have something for posterity. 10:59:10 Everybody smiles — just prepare yourself to turn your camera on; last time I got a couple of cool screenshots. 10:59:18 And this brings us to the last talk, and this one is a little bit more toward the front end of the data acquisition. This is by Damien Neyret, and he's talking about what I think is, obviously, the successor to SAMPA — a new readout chip for MPGDs and streaming. 10:59:39 So, Damien, if you are there, please share your screen. — Can you hear me? Okay, let me — Yes, we can hear you. — Great. Okay, very good. So let me share my screen. 11:00:05 10:59:57 Okay, share, and — 11:00:01 Yes. Is it okay? Okay, so let me go to full screen. 11:00:09 Yes, yes. Okay. 11:00:12 Okay, very good.
11:00:14 So thank you for giving me the opportunity to present this project. It's a very new project to develop a readout chip, mostly dedicated to MPGDs but not only. 11:00:36 So let me first give the motivation of the project. The idea here is to develop a chip mostly dedicated to MPGD readout, but not only, in the context of the EIC project. 11:01:02 We could also try to read out other kinds of detectors — calorimeters, or detectors with really specific constraints, in particular for particle identification where a very good time resolution is needed — but this is an option; 11:01:18 we are not sure we will do something there, but it's something we have in mind. 11:01:25 And of course, yes, it will be a chip dedicated to streaming readout. 11:01:39 So it's a collaboration around Paris-Saclay University 11:01:40 with several laboratories associated with this university, which we know well because we have developed chips with them. 11:01:54 These institutes 11:02:12 have developed well-known chips — you are probably familiar with them — and also a range of other chips; in particular we can mention the SAMPIC, which has a really good timing resolution. 11:02:13 So it's a very preliminary project; we still have to define the specifications 11:02:28 for this chip, and we are interacting with the detector groups. 11:02:30 11:02:48 11:02:49 We are also doing preliminary studies on the possibility to have the chip 11:03:06 integrated close to the detector, and an open point will be which technology to use — a 130-nanometer technology or a more modern 65-nanometer one — that's also an open question 11:03:13 we have in mind. 11:03:16 The chip architecture — of course it's really very preliminary — could be 11:03:23 built from a front-end stage 11:03:41 with a charge amplifier, 11:03:34 internal digitization, 11:03:41 a common DSP which will read all channels and do some data treatment — I will come back on this treatment stage — to send the data to the streaming readout DAQ. 11:03:56 So, of course, this is all really 11:04:01 preliminary. 11:04:05 Here we go into details concerning the different parts of the chip, what specifications we have in mind, and what the open questions are. 11:04:34 Concerning the front-end part, we plan to have a limited number of channels, say 32 or 64, compared to other chips which may have more channels, in order to keep the size of the chip small, 11:04:43 to limit the power consumption, and also to have a better production yield. So we will probably do something limited in terms of number of channels.
11:05:04 For the peaking time, we are planning to go to something between 50 and 200 nanoseconds, but from the discussions we have had, some people asked us to consider other values as well, 11:05:24 so it's something we, and also a lot of other people, 11:05:30 have to study. 11:05:30 As the chip would read out different kinds of detectors with different signal sizes, we also plan to adapt the dynamic range, from the small signals coming from MPGDs up to larger signals coming from other detectors, for instance a tracker; 11:05:51 also the input capacitance is an important question, 11:05:56 because among the detectors we plan to read out is, for instance, a TPC, which will have a low capacitance per pad, and also larger detectors with much larger capacitance. For the moment we plan to have 11:06:16 a given maximum input capacitance, but it could be interesting to increase this value, so we have to think about it. There would also be a possibility to have an internal pulse injection for calibration, 11:06:34 something which is still to be studied and which is optional. 11:06:41 So we have several things to study to see what would be possible. 11:06:50 In terms of performance — the noise figure versus detector capacitance — this is still under study. 11:06:52 We also have to clarify what rate the chip will be able to read out, 11:07:15 which could be important for some applications, 11:07:21 for instance the usage of MPGDs in a TPC or for photon detection; 11:07:29 we need to evaluate this. 11:07:34 And the bottom point: the time to recover from saturation — MPGDs may have sparks, and of course we have to understand how long the 11:07:52 chip will be unavailable in that situation. 11:07:54 Concerning the digital path: for the digitization, for the moment we plan a fairly wide range, between 10 and 14 bits, but we also have some requests for something lighter, 11:08:07 and we have to study what would be possible, which is not clear yet, and to which value we want to go. 11:08:22 Also concerning the timing, we plan to have a quite precise time measurement with TDCs of maybe 12 bits, though it is not clear whether it will be possible to go to larger values. 11:08:36 Of course, in terms of clocks, we will have an external clock input; one possibility would be to add a TDC in addition to the flash ADC, but that's an assumption that we still have to verify is feasible. 11:08:57 11:09:03 It is also open to have some data treatment on the chip — 11:09:23 baseline subtraction, which would be a classical feature, zero suppression and common-mode correction, which could be an important feature for such a chip — and of course also, in order to reduce the data flux, we plan to use this 11:09:28 possibility, but we still have to discuss with the detector groups to know what would be necessary or interesting for them. 11:09:48 We want to keep the possibility to have a triggered mode for tests or similar applications.
But of course, the chip is primarily aimed at streaming readout. 11:10:05 11:10:10 We have not yet defined what the maximum rate will be, so in interaction with the groups 11:10:35 and from simulation we try to figure out what the expected flux will be — it is not too clear yet — but of course we have to scale the bandwidth of the output of the chip according to that, which will also give us some margin, which 11:10:43 is important. 11:10:44 So, as I said, we plan to have a small die, about one centimeter square, 11:10:57 in order to have something quite cheap and easy to produce. The technology is still an open question, and we have to investigate what would be the best one to use. 11:11:10 The idea here is to keep the power consumption at around 10 to 15 milliwatts per channel, which is typical for such chips and which will be quite easy 11:11:25 to cool, 11:11:27 even if the chips are integrated inside the detector or close to the detector. 11:11:35 We still have some open questions concerning what the temperature will be and what the radiation level will be, but we don't expect too many problems there; it's something to study, but it's not critical. 11:11:54 So, the prospects — what's next? 11:11:59 This project is still at a very preliminary stage. 11:12:03 We still have to define the specifications, and we will do studies in order to 11:12:14 develop the chip, probably two or three iterations of the chip, and as explained, a lot of the questions are still open. I give here a rough timeline of what is predicted. 11:12:29 We try to make it fast, in order to be compatible with the timeline of the EIC project, in particular with the upcoming critical decisions. 11:12:44 We plan to do preliminary studies, which would take something like a year, and after that a development phase with prototypes, 11:13:01 in order to be compatible with the project timeline, and then the final production for the EIC project. 11:13:21 So, as I said, the collaboration is still being put together — it is open for more collaborators — and we are in the process of formalizing this collaboration. 11:13:36 We are also investigating how we can finance this project. 11:13:45 We have several possibilities, but they need to be investigated, and I would be happy to hear from people who are interested in participating. 11:14:10 In that case, do not hesitate to contact us — the team or me. I thank you for your attention, 11:14:15 and I am delighted to take questions if you have any. 11:14:18 Okay, so thank you, Damien. You said that you had a little bit shorter talk, so we have plenty of time. 11:14:27 I don't see any immediate hands coming up, but — okay, so here's Fernando, please go ahead. 11:14:41 Good morning. That's a really great presentation; it's very encouraging that you are developing this collaboration with Saclay 11:14:51 and, of course, based on the previous experience that you at CEA Saclay certainly have with developing chips.
These are more comments that I'm providing: in terms of the timeline, I think it's within the guidelines and the expectations 11:15:11 that we had before, of four plus or minus one year for development — that's good. In terms of funding — this is not official, of course, but in talking with the project — 11:15:25 I think there is some intent to provide some funding through the R&D and design phases of the project. So I think it's something to discuss in the near future. 11:15:42 One last comment I have is regarding the specifications. 11:15:47 We are currently gathering the information from the various detector groups, and I know you have done that also in the past. 11:15:55 But my plan is to have a summary table for the detector types — the reference detector, that's what's on the books, so to speak, and the other MPGD options — and we should have a good handle on that by the end of June. 11:16:19 So I think that's in time with your expectations as well. 11:16:24 Okay. But overall, very nice. Thank you. 11:16:28 Thank you. 11:16:32 Any other questions? 11:16:35 I just wanted to say, as Fernando said before, of course we are really interested to get information from the various groups — we already discussed that — and yes, if you have more information on what is expected 11:16:56 for these detectors, of course we would be very interested. 11:17:01 And so, for the following year, I think we will keep in contact with you. 11:17:08 It's an important point; it will be absolutely necessary. 11:17:12 Of course. 11:17:13 Thank you. 11:17:14 Yeah, we certainly will do. Thank you. 11:17:18 I mean, of course, Damien and I have had other meetings where I have seen some of these slides, and we already had some discussion there. 11:17:28 So the one thing that we haven't discussed: at some point you mentioned here on your slides the 32 or 64 or more channels, something like that, right. 11:17:39 So, I mean, if you think of the SAMPA right now — and it's our colleague who is the expert for us — 11:17:46 in my view it's already getting a little bit tight to put, in our case, like eight chips on a board. So whatever higher channel density you 11:18:04 can put on a readout chip without running afoul of, you know, signal tracing and 11:18:07 how many bumps you can have, and whatnot — 11:18:07 that would be nice. But what do you think is actually the maximum that you can cram into a chip? I mean, would even 128 be doable, or is that getting a little bit too much? 11:18:22 Would 128 be too much? 11:18:42 I mean, everything is possible, of course, but if we want to keep the chip small, 128 — 11:18:42 and if our microelectronics colleagues, who are really the experts, have more to say, please don't hesitate — but 11:18:49 the idea is to have a quite small chip. 11:18:52 With small dies we would prefer to have more chips of 11:19:02 modest size and lower consumption per chip, 11:19:05 which are easier to handle. 11:19:09 Okay, so I see a hand from Marco. 11:19:15 Yeah, okay. I'm just going to repeat what was said: 11:19:44 11:19:47 the important thing — the aim of this presentation — is to start the discussion between the detector side and the front-end experts, to agree about what is needed,
11:20:28 and which parameters can more easily be adapted to have the best possible chip. And, 11:20:25 as you say — 11:20:28 coming back to the question of the number of channels, approaching it from a different angle — one should really check whether fewer channels and a slightly smaller, cheaper chip is really better than 11:20:54 a bigger chip with more channels, if that gives you a better density, 11:21:06 even if you lose a little bit in other parameters, including maybe some yield of the chip. 11:21:18 Okay. But I believe at this moment the yield itself may not be the biggest problem, okay, unless it's really horrible, because we never reach the kind of volumes 11:21:44 where that dominates: the cost is mostly in the development and the mask set, not really in producing a few more chips. 11:21:55 So, to be sure, going down in channel count to optimize all the parameters can still be interesting and acceptable. 11:22:10 Okay. 11:22:12 All right, I hear you. 11:22:14 Okay — and about the second bullet that we see on the screen, 11:22:18 you know, the 50 — 11:22:21 I think we're interested in going lower, so you can actually have faster shaping times. 11:22:26 But — 11:22:27 okay, so I don't have any further comments and I don't see any more hands up here. 11:22:34 So thank you again for this nice overview and the roadmap. 11:22:42 Okay. 11:22:54 So now, I would suggest that we take a minute — yes — and everybody please turn your cameras on 11:22:54 and try to smile as much as 11:22:59 I will try. So Michael, if you can take your hand down — I mean, it's not that you can't be there with your hand up on the screen, but — 11:23:13 anybody else? 11:23:15 So I see Jin — Jin, you're in here twice. Have you found anything? No. 11:23:25 Okay. 11:23:31 There you are — you are obviously in twice, you know, that's why. 11:23:42 Damien, Ethan. So yeah — I can do some Photoshopping later. 11:23:47 Okay. 11:23:51 Okay, so, okay. 11:23:54 All right, let me just start here before people lose interest. 11:24:01 So, okay, what I'll do now — I need to smile. 11:24:06 Okay, let's try this again — 11:24:16 once, going twice. 11:24:21 Alright, and then, just for security, let's make one that goes to the clipboard. 11:24:31 And one more — I'll never be as good as a real photographer, but anyway — okay, now everybody has their camera on, and can you please come up a little bit. 11:24:43 Okay. 11:24:44 Very nice. 11:24:47 Yep, very good. Okay. 11:24:51 So that concludes our session here; we're going to reconvene at two. 11:24:57 Actually, let's reconvene 15 minutes early. 13:43:09 Hi Jan, I restarted recording and closed captioning. 13:43:15 Hey. 13:43:19 Stop sharing my screen. 13:43:37 I can even start my video so you see me. 13:43:43 13:43:50 Have you seen the report? 13:43:57 Oh, sorry, what report? — The eRD review report, the review of eRD23. 13:44:08 Oh yeah, yeah, that was a long time ago. 13:44:11 Yeah, I mean, it's the first meeting after that, so I put it on slides so we can discuss it. That's why I wanted to start early. 13:44:19 Okay, yeah. 13:44:27 Martin sent me the picture, the group photograph. 13:44:33 Oh, that's great. 13:44:34 I'm wondering how to put it onto the Indico site; I've uploaded it.
13:44:41 But now what do I do with it? 13:45:13 Do you want to actually display it on the first page, or do you want to just put it up as a file that people can click on and download to look at? 13:45:25 Put it on the first page. 13:45:27 So you just have to go in and — 13:45:31 I think you can just go in and edit that page and add it. I just sent you a link in the chat, what we did with such a photo before. 13:45:48 Yeah, that's what it is. 13:46:00 I guess you can just put it in the intro text, right? Yeah. 13:46:17 Okay. Okay. 13:46:07 That's how it looks. 13:46:11 I'm waiting for it to load now. 13:46:14 Hang on. 13:46:21 Yeah. 13:46:25 I can do that, I think. 13:46:33 The description has an image button, so you can just add an image to the description and it will appear there. 13:46:53 So obviously we have restarted now; I guess I'm not sure if everybody's back, 13:47:04 because in this view I cannot get a normal participant list. 13:47:11 So we've got 17 people online. I think we had more before, right? — Yeah, we were more. 13:47:18 Let's wait a couple of minutes. Yeah. 13:47:22 Yeah, over 80 people have registered. 13:47:29 But the most I've seen is 38 online at one time. 13:47:37 And this morning it was a little bit lower, around 30, I think. I guess we also interfered with the Nuclear Physics Goes to Washington thing, right? Yeah, yeah. 13:47:54 Physics Goes to Washington — wasn't that yesterday? 13:47:59 13:48:01 Unless there's something bigger than Nuclear Physics Goes to Washington — yeah, nuclear physics. 13:48:10 But that was two days ago, it happened already. 13:48:16 Yeah. 13:48:26 Yeah, John 13:48:26 13:48:29 went to it. 13:48:31 He had to leave our meeting after his presentation. 13:48:42 13:48:46 No, it's getting better, I think. 13:48:57 13:48:56 So maybe I should stop, right? 13:48:59 No, let's see, we're updating now. 13:49:04 I guess the rest come back at two, right? 13:49:18 And it is 1:35. 13:49:18 I guess I can wait a couple more minutes. I don't think we need that long for this, and 13:49:25 I just want to get it done before everybody comes, so we don't waste time. Yeah. 13:49:36 Where did you say the image button was? 13:49:39 So if you edit the overview, right, 13:49:45 there is a button which inserts an image — near the URL and the boldface buttons. 13:49:55 Yeah, in the top row, the first row, there is the flag for it, there's an image. — Yeah, I see it. Okay, I got it. 13:50:06 Okay, we'll stick it in there. 13:54:23 It is now two, I guess we should just start. 13:54:25 Yeah, yeah, maybe. So, welcome back to the last session. 13:54:31 Contrary to the other sessions we will not have many talks; I will give a little bit of a review and the answers I got from Thomas about the future, and then I think it will be more of a question-and-answer session with Abhay and the others. 13:54:45 And we should discuss the next steps. I think there's a phase transition now with the EIC project, how everything goes, and what we do there. 13:54:56 So the first thing I wanted to show you, if you haven't seen it, is what was written in the review report on eRD23. 13:55:05 It was three paragraphs, and I have to say, I don't know — either I did a really bad job in my talk,
13:55:12 or they didn't listen, I don't know. I was actually asked about this in the talk and I explained it, and it still ended up wrong in the findings. You see it here, and I highlighted it — for instance: "the presentation mentions that sPHENIX is doing streaming tests at 18 terabytes per second; storing that much data would cost of order 1 billion dollars", which, it says, is of order the US operations budget. I don't know how you can write this — obviously we are not going to save that. In the talk, and in our reports for the last however many years, we always say our prediction is that we will be at a rate smaller than sPHENIX, which is 130 gigabits per second. So I don't know how this can end up in here, and it's also quite a change from the last reports, where they were concerned with other things.
13:56:19 Then: "different types or levels of triggers for feature finding or reconstruction may be applicable for different sub-detectors." I don't know — I have limited time in the presentation; I don't know what they want.
13:56:35 Then there's a comment section: "However, the proponents seem to be under the impression that a new ASIC for some sub-detectors would need many years of development. Specification and understanding of the requirements for a new ASIC would indeed take a good deal of time, but implementing a solution to those requirements should not take a long time if those requirements are technically feasible." I don't know what this is supposed to mean: first you have to find this understanding, and they say themselves that takes time, so the argument remains that it takes time. And second, we had the same sentence in the Yellow Report, and the comment I got there from the relevant experts was: oh, the five years you assume is way too short; it took at least six for us, and it can take much longer. That's why I was asking these questions yesterday — I don't see how this can be done in a few years, or two years or so, really from inception, if you have to wait half a year for one iteration and you have to find five people who are actually able to do this. It's just not clear to me how this should happen that fast.
13:57:50 And then there is the comment indicating that this may not be the correct focus — that the point is not to just move data, but to filter the data down to a size that could actually be stored. I think the only estimate we have right now is that the streaming data would be about the same amount as the triggered data, if we actually take all the physics, so there's not much overhead. That's the price you have to pay, and it's storable without much filtering. So I don't know where this comes from. Of course, the situation could turn out differently and then we would need this, but it's quite a change from the last report, which said: we are not sure if triggered or streaming is the right thing, and you should make a comparison of the cost. I don't know — I think it's moving the goalposts a little bit.
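As an editorial aside, a quick back-of-the-envelope shows why the two rates quoted above lead to such different conclusions. The running time per year and the cost per terabyte below are illustrative assumptions, not figures from the talk or the report.

```python
# Rough comparison of the rate written in the review findings (18 TB/s)
# with the sPHENIX-like upper bound the proponents actually quoted (130 Gb/s).
# Assumed numbers (illustrative only): 1e7 s of beam per year, $10 per TB stored.

SECONDS_PER_YEAR = 1.0e7          # assumed effective running time per year
COST_PER_TB_USD = 10.0            # assumed archival storage cost

def yearly_volume_tb(rate_bytes_per_s: float) -> float:
    """Data volume per year in terabytes for a given sustained rate."""
    return rate_bytes_per_s * SECONDS_PER_YEAR / 1e12

findings_rate = 18e12             # 18 TB/s as written in the findings
quoted_rate = 130e9 / 8           # 130 Gb/s -> bytes/s, the rate actually presented

for label, rate in [("findings (18 TB/s)", findings_rate),
                    ("presented (130 Gb/s)", quoted_rate)]:
    volume = yearly_volume_tb(rate)
    print(f"{label}: {volume:.3g} TB/year, ~${volume * COST_PER_TB_USD:,.0f} to store")

# With these assumptions the 18 TB/s figure indeed implies an O($1B) storage bill,
# while the ~130 Gb/s figure implies only O($1M) -- which is the point being argued.
```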
13:58:41 All right, the recommendation: "We should concentrate on understanding how to reduce the size of the data stream, so that we can actually afford to store the stream." So it's this one theme, right — it's too expensive to save everything, and a combination of software triggers and so on is needed for that. "It seems likely that it will turn out to be optimal to use different strategies for different sub-detectors' data streams." Nothing new there, I think.
13:59:14 I have to say I'm also a little bit disappointed by that. I don't know — maybe we can discuss this now, before we go to the other questions I had. What do you think about it?
13:59:30 — It was like this last time too, right. — Right. — It's over, never mind; that's about the best thing to say about it. — Yeah. It was a little bit frustrating to me, but, with all respect for everybody who wrote the report, I think it's a gross mischaracterization of what we did.
13:59:58 So, the next question is: what do we do in the future? The R&D program is ending — not that we got any money out of it — but I think the EIC redirect is really starting now. So far most of our work was not EIC-focused; it's maybe a little different at the labs, but most of the work was carried out doing an experiment like sPHENIX, or GlueX and CLAS12, wherever we want streaming readout. We can think about the EIC in the same context, but the paperwork is actually for these experiments.
14:00:54 ... solutions for the EIC. And I'm not sure how we can actually find that time, and get students involved, and find money for the students to do this. I think that's a big problem, not only for streaming readout but generally for the proposal process and the short timeline we have for that.
14:01:13 So I went ahead and invited Abhay, Rolf, and Thomas to this session. Thomas actually had no time, but I sent questions to all of them and I have answers from Thomas, and we can go through them. Maybe this can be the start of a discussion. So — oh, and Rolf came in one minute before I even mentioned him. Perfect, it's great that you're here. So maybe we can go through the questions I asked, and then this might trigger further questions from the people here.
14:01:57 The first question was: where do you see the readout group's work? Do you see the readout group as part of the detector proposals, or as something which is more at home in the hosting lab? Where do you see the transition: should the detector proposal cover the technical part, or should the hosting lab — that is, where between the detector front end and analysis? And also, how do you see the exclusivity of contributions with regard to a particular proposal? I think IP6 is pushing this a little bit more, but I think we have to discuss it.
14:02:37 And that's the answer from Thomas: he sees it as part of the detector proposal, and I think that's probably the natural way to think about it. But generally this is a new model, right: we have this project, and the project is already pretty active on that front, talking to companies and labs.
14:03:05 I actually don't have any details on that, but it is interesting to know that the electronics will be funded out of the project, and the counting-house computing as part of the DAQ — which is actually quite interesting if you talk about how DAQ computing and other computing might fuse together, right: we might ship the data out to an HPC center on campus or even off campus, so we'll have to see how that plays out. The moment the data gets transferred out of the counting room, the project funding ends — so this may be a problem if we don't actually have a counting room with a lot of computing. Storage, computing, reconstruction, calibration and so on will have to come from the operations funding and will be in the hands of the respective departments, that being the JLab and BNL computing facilities.
14:03:59 For the other questions: "certainly not two ASICs of each type, so much will be shared in terms of front-end electronics." I guess that's true, at least if the detectors are similar — then the readout electronics would be similar. "In the proposal collaborations I'm sure that will be slightly different in how they do it, so I'm assuming some level of divergence." Now, we have to see — the group is maybe not that big. So will there really be two independent implementations of all of this, or will people come together and implement it one way, maybe with slightly different hardware but the same software framework or something like that?
14:04:46 Should I go to the next question, or do you want to discuss this first — Rolf or Abhay, if you have any comments?
14:04:54 — Yeah, I personally think we should discuss this, because a lot of this is at the root of all of our project work and how the collaborations will work. And a lot of these things are intertwined, because it also depends to what extent people fold in both variations of streaming. So this is all intertwined — now let me tell you some things. It's a bit misleading to say the project is already pretty active on this front, talking to companies and labs. What's happening is: if we want to develop any new ASIC anywhere, the timescale is normally four or five years, so at some level we need to know what's coming up. And I think that's not just the project, that's everybody — there have been more people involved in some of those meetings — so I think that's just good practice, in my view, to see where we can take advantage of companies and labs. So I don't see that too much as project or not. In the end, what the project can do is a piece of front-end, a piece of detector R&D, and it has to come from the users. On the other hand, the timescale is such that it is part of our risk, so we have to know what's coming up.
14:06:15 For the rest I kind of agree with what Thomas says. It is true that historically electronics readout, and just the detector counting-house online computing, is part of a project. It is true that anything normally resembling or smelling like scientific storage, computing, reconstruction is seen as operations funding, and that has an implication: if you start any project in DOE, you don't only need to say what your project costs, you also need to say if there are additional operational costs.
14:06:58 So if — and I think that's a little bit what the detector R&D committee complained about — if you make your models and assume your storage cost comes falling out of the sky, you're on the wrong track. It's not going to work like that. All these pieces have to come in, whatever model you have, so we need to know beforehand at some level.
14:07:24 And okay, Dave is there, and maybe Graham is there too, and others. We did an estimate of the streaming models and what they imply for computing, storage, tapes and so on. So I'm actually surprised that people didn't use that — I mean, it's there. So this question shouldn't have come up, because we know the answer already. But the problem is that you cannot just say "I shift everything from online to storage", because that means you increase the operational storage costs, and that's going to backfire.
14:07:59 — But there's actually more, right? If the data is bigger... — Yeah, if it's more. And again, up to now, at least in the documents we have with Dave, Graham, and Fernando — what we wrote, which apparently is not as well known as it should be — it's not so bad, so that we know. That was on question 1a.
14:08:26 On the other hand, this is tricky, isn't it — you can get into collaboration business. I see it at Jefferson Lab: I have four halls, and they in principle can all go different ways, and we use that for different analysis programs. So I personally think we're going to go more down the route of sharing and doing the computing models the same. I do think the collaborations should think about it — I'm actually very happy that at least in one of the proto-collaborations computing is embraced, so there is emphasis already on computing and AI, and I think that's very important. And I think it's the job of the collaborations, together as needed with the labs — in the end we, the labs, have to make sure that we have the facility available to analyze the data, that we know what's coming up, and that we know how we can do the distributed computing.
14:09:17 But again, I cannot predestine that. I already have trouble enough at Jefferson Lab getting two halls to eat from the same plate, let alone four. I personally think we did well as a global community in the Yellow Report: we had a lot of synergy, a lot of people working together, and the last thing we want is for that to diverge. In the end, we are all supposed to go through the same door together. So my personal answer for 1a: I personally think there should be one, and it should indeed not be two ASICs of each type — that's just a waste of our resources, in my view. So I personally think, if it were me running the proto-collaborations, they should think more globally about computing and readout electronics together — but that's just me, and I cannot predestine the different detector proposals.
14:10:12 — Yeah, okay — do you want to go ahead? — Yeah, absolutely. I think what you said there is exactly it. In fact, we should probably identify those kinds of things that will be very common to all the detector proposals which are coming in, and bring those groups together now and get them to start thinking together, instead of having it only in one particular proposal — so I think I'm completely with you there.
14:10:54 So, from the project side, as people know, we've always felt that we need to give the users some help in certain areas, and that is obviously things related to the magnets, obviously things related to the integration, and I think computing — the different needs of computing — is part of that; we've got to work together. And I think Dave is there, I'm sure Fernando is there, Chris is there — everybody's there, I'm sure people know each other and don't need to be introduced; Graham was there, and the Brookhaven and JLab colleagues too. I think in the end we've got to work together here to prepare this. We're going into an era of distributed computing, we're going into an era of AI, and we're going, in my view, into an era of streaming. So we've got to make sure that everybody is aligned with this. And so I do think it is good if the different collaborations have computing directly in there, because we've got to prepare, and I think it's good if we work with the host laboratories for contacts, or coordination, whatever you call it.
14:11:53 And at least from our view, from the project, up to now — let me say it like that — the people we try to coordinate with for computing, towards the era of distributed computing, are Graham Heyes, who is here now, and Jerome Lauret, who is not here. As far as I can judge, those are for now the two contacts we have for computing at the labs.
14:12:17 — So, this is Graham. Yeah, one of the concerns I have had, through trying to organize this just for JLab with the four halls, is that even having the four halls in the same building in the same lab, the tendency to wander off on tangents is very high. And the problem with the EIC is, it's a very big project, and we can't afford to split the resources. The other part is, we have some groups who are actually worrying about a computing model and developing their own without even talking to Jerome or myself about what the computing model is that we've already got at BNL and JLab, and how we're thinking of merging those. And I think it would be far more productive for us to get together, so next week we are kicking off — at least trying to trace who we should invite as representatives from each one of the proto-collaborations, to talk about exactly what their needs are.
14:13:31 Yeah, I think the timeline is a little bit unfortunate, right. It's so long that people at the same time underestimate and overestimate what can be done. It sounds funny, but I think that's true: some people think, okay, we can really redefine all of this, but a lot of it has a lot of momentum, and a lot of tradition, and people who are trained in certain things, and they cannot just get rid of that.
14:14:03 — This is Chris. This is a great discussion, and — not to push things off onto the computing — but you all know that the technology for AI and chips and things will just continue to evolve. It's the hardest thing as an electrical engineer: once you find a solution and you pick that production chip, guess what — you're obsolete.
14:14:26 And you know, I've heard comments like "you guys are still using VME" — well, we are, but we picked a choice that had some vision; it'll take us another 15 years, but yes, the chips are obsolete. So my point is: you've got Fernando in there, you've got Dave, you've got me — there has to be a grouping. I don't want to say we're going to be the governors of this thing, but you definitely don't want to duplicate efforts. Rolf, you know we would never have made it for four halls if we had had duplicate proposals for readout. Now we've got almost four halls — Hall D wasn't really participating but they are now, and SoLID is around the corner — so just don't duplicate any effort; it's just not going to work.
14:15:13 I had a — sorry to take some time here — but this new DOE model, what exactly does that mean? Do they not want computing in the project?
14:15:28 — It's always true that for projects, the mindset of, in this instance, the DOE Office of Science is that anything that smells like scientific computing is not part of the project, and this is where that comes from. — This is Thomas replying, isn't it? — This is nothing new; Chris, we're used to this. There's nothing new here from my point of view; this has been with us for 10, 15, 20 years.
14:15:51 But it makes sense, right or wrong, because if you wait — even if not close to 10 years — until you have to install it, you purchase it then, and the computing, we all know, is going to leapfrog whatever you specify now.
14:16:11 To reuse a phrase, there's the long and the short of it. The long is that it is true that whatever we deploy in 10 years' time to do the data analysis, the reconstruction, the AI, everything else for the EIC, is going to be completely different from what we have now. The short of it is that we have proto-collaborations, who are coming along, who want to simulate the detectors and do data processing and all that good stuff right now. So we kind of have two parallel efforts: the short range, which is providing the things people need right now to be able to do the planning, and the long range, which is thinking about what this thing is finally going to look like.
14:16:53 — Yeah, and I wouldn't even leave it at that. Look, we know that some of the developments for electronics take five years, and people say, well, there's so much pressure now, we go so fast now, and we still have 10 years — but it's not like that. If in the end you want to make your readout, your streaming, whatever, compatible with AI — to give you one example — things will change over the next years. — Oh yes, definitely. — If we don't follow it now, we essentially won't be able to use it. So you always have the short- and the long-term schedule. On the other hand, same with the interaction region, same with anything else: if we don't think about it now, you might really rule yourself out. So be careful there — in the end, there is a short term which directly impacts the long term. Yeah, we can assume we still have 10 or 15 years, but a lot of things we have to do now.
14:17:54 Yeah, and the long term must be an evolution of the short term; it cannot be a complete break of the technology or anything. I'm sure Fernando knows that much better than me, so he can say.
14:17:59 — I actually have a more specific question regarding moving forward in terms of readout implementation, and it actually came up this morning. My question is basically — and it seems to be a question of a lot of people developing these things — let's say a group has an idea, or a plan, to develop an ASIC, for instance. With regard to funding, once of course we agree that it is a reasonable approach, is funding possible through the R&D and design phases of the project? And what processes do they have to go through to get accepted?
14:18:50 — I think I actually have a similar question in the next couple of slides, so maybe we can go to that and then discuss this. Okay. So maybe, Abhay, go on.
14:19:15 — Did that help with the question or not? There's no good answer, I think. — That was a very good answer; at least it's very congruent with my views. I think as long as you communicate with the other efforts, that's the right thing to do. And as Rolf emphasized — and I think everyone else has said it — having parallel efforts when the community is evolving in such a direction is just counterproductive. So I think it's about going together: just work together, and keep the project, particularly Rolf and the project leadership, informed, basically.
14:19:44 — And then there's a bit of a problem, right: what if one of the collaborations doesn't want to let you work on the other proposal too? — No, no, that's not the way it is going to be; it shouldn't be that way at all. I think there are specific requirements, for example, that should be considered as a common task for everyone who wants to work at the EIC — like the polarimetry, like the luminosity measurement — there are some fundamental things that will be used by everyone. And here is one opportunity, one avenue, which also needs some common thinking, so I don't think there will be a problem with this at all. In principle there shouldn't be a problem anywhere, but in this particular place it just makes most sense not to have separate efforts. So I think we should just reach out — and I personally would go beyond that. The problem people have a little bit is some of the "mine" and "yours" nonsense going on — it's not just this; everything is going to be intertwined now. Whatever you just said for polarimetry and computing is, in my mind, the same for tracking, the same for particle ID, the same for calorimetry. In my mind, you cannot separate the detector from the electronics, from the computing, from the polarimetry anymore. So I hear what you say, but you cannot isolate those things anymore.
14:21:07 — I'm a strong proponent of being very concrete — you cannot be everywhere, but we'll see how that works out. — Yeah, exactly.
14:21:22 So, my second question was: if two detectors are both built,
do you expect them to use the same DAQ and software infrastructure, or do you expect these to be two different groups? Thomas's answer: "I expect two different groups, but again I assume much of the readout electronics might be shared." I think we all agree that this is probably what's going to happen.
14:21:55 This part was news to me — I had the impression that at least the EIC User Group was pushing for two detectors at the same time. Yes or no? So, Rolf, I don't know if you agree with Thomas right here.
14:21:59 — So let me first say something about the timing, so that people understand it — and you could see it in a presentation earlier. It is true that there is one detector in the project, so it is true that we need one detector fully assembled, ready to go, at CD-4A for the project. It is also true that there is no second detector in the project. And if you want to be there at first official operations, that is CD-4, which is two to three years later. So the timescale is different. So that's nothing new; that has been portrayed before. It just has to do with the fact that, project-wise, you need one detector ready for commissioning at CD-4A, and for the other one — don't expect much physics going on between CD-4A and CD-4; if you want to start physics at the same time, CD-4 just gives you two-plus years more. On the other hand — and here I don't agree — two-plus years is not enough, in my view, to really affect the solution chosen for exactly the topic we're talking about now. That's too short a timescale; nothing is going to change that dramatically in two years. You certainly might change some solutions — one chip might be replaced by another — exactly, yeah — but I don't think the channel philosophy should be different between the two.
14:23:13 On the other hand, "I expect two different groups" — that is so far a question of religion, and I can't force people to believe in the same religion. But I can tell you, from the JLab point of view, we do whatever we can to make sure people use similar software: do the right thing now, find the right solution, and then stick with it. Dave Lawrence or others can tell you about that — the software is all over the place. I don't think we're done on that; let's try to do it together and do it globally. That's my view.
14:23:49 Okay, third question: what funding opportunities do you expect? That comes back to the question from Fernando: how can we finance the work for students and universities? I think it's somewhat clear for the labs. Will the program continue, change shape, or be discontinued? "As I said above, this is funded out of the project and there will be a control account manager for this, and it is certainly targeted." So it's my understanding that this is money in the project meant for construction — is that correct? — So, any project comes with R&D funds, and it has to, more or less: you have to mitigate the critical risks. For the DOE these are called acquisition projects — they see a project as if you buy a house, so they call it an acquisition project. So there is always R&D. The problem is that it's correlated directly with the risks, be it cost, schedule, or technical, within the project. And that's not what we normally do for R&D as scientists — we try to always optimize, we tinker, we make it better.
14:25:00 That's not how a DOE project sees R&D: they see it as affecting cost, schedule, and technical risk. So we have R&D funds in the project, but they cannot be used for generic R&D, which is a bit of a problem. Now, the first piece of the question, about what funding opportunities to expect — that's not a project issue, so there I can't help. But there is R&D within the project, and a piece of the risks we identified has to do with ASICs — which is partly why we pushed, for this and all other reasons, to see more or less what's coming up for ASICs, because that is a long-term timescale. So there are chances for R&D funds for ASIC development.
14:25:45 For things like streaming it's much more tricky, because they see it as generic. So now we are trying to also establish a continuation of the generic R&D program. In my mind — and I'm sure Abhay feels the same, as all of you do — look at what that R&D program, which we have had for the last 10 years, actually cost: maybe a million, or a million and a half, dollars a year. If you see how much bang for the buck you got — how many people basically contributed, how many people added to this through LDRD, through foreign funds, through universities doing it on their own dime, my students working on things — to me it's a no-brainer that you always need a generic R&D program, especially when new things develop. But we're not there yet; we want to get it moving.
14:26:35 And in my view, look, for the streaming — and we know that in reality — what did we get out of these $20-30,000 a year? It was mainly that we more or less got some blessing and could keep things moving. So I personally think we'll find a way. Anyway, that's my feeling. Abhay, maybe you can add, as the other project person. — I always hate it; I just do the science side of projects. But it makes sense — that's all I can say.
14:27:08 I see Martin has his hand up. — Just related to what was just being said: on Wednesday, in my talk, I explained how what might be considered generic EIC R&D — all the stuff with the TPC readout — was, in a manner of speaking, riding on the coattails of sPHENIX. I mean, we went to Fermilab for the test beam there, and then we opportunistically piggybacked on some other measurement — this lead-tungstate calorimeter measurement that Craig did — and all our stuff. I don't really see a place for this in the current universe. So right now, in like three weeks, we are going for one final test — of the ASICs in this case, from that trip to Fermilab — but then this is going to dry up, and I don't actually see how this can continue. I, for example, wouldn't know where to go to ask for funds for such a thing, and I don't know if there's any guidance on where and how we would actually go about such a thing.
14:28:29 Yeah, I would have the same question. Let's say I have concrete projects, a concrete plan for things I would like students or a postdoc to develop — how can I, at my university, get funds to pay them? Right.
14:28:49 Should that just be the general DOE or NSF grant route, like usual, or is there — or will there be — a special program for EIC-related research or something like that?
14:29:05 — Many users did get their funds for the streaming R&D through the regular channels, don't get me wrong. Now, our plan is — and I can't tell you the outcome yet — it's very clear that at the moment we're still doing essentially the R&D which was started earlier; we're still in FY21, and people should have their funds for their FY21 R&D from the program. And I think the plan is, more or less, to let people know in the June-July timeframe whether we can find any way to continue this program, yes or no. I can't tell you that yet. In the end, the nuclear physics budget is constrained like crazy. I'm sure Martin will know what the issue is with the funds at Brookhaven and why this program is difficult to continue, even though they might be willing to — they are facing a lot of redirect issues too.
14:29:55 I think, overall, the laboratories can help. Normally, I'm sure that — be it at Brookhaven with Jamie Dunlop, or at Jefferson Lab with me — we're trying to help projects where we can. We cannot easily pay for students, for instance; that's a different color of money from ours, and that's always the problem. Things like streaming software: I can guarantee you that JLab has been helping; calorimetry, I can guarantee you, and other things. But we normally can only do things if it's in our own interest too — at Brookhaven, whether it's feeding sPHENIX; at JLab, likewise, we have to also be interested. It's more difficult for the national laboratories to help these students left and right, and that just has to do with colors of money — there's nothing I can do there. But if there's R&D of general interest for everybody, we can normally try to help.
14:30:47 And again, we're trying to more or less get to a more generic R&D program. We know we need to give an answer to the users in the June-July timeframe; otherwise everyone just stops October 1st. We know that; we just can't tell you yet. And the hiccup is really that a million dollars, at the moment, in the Office of Nuclear Physics for next year is a big thing. And then they start arguing about, okay, redirect from the laboratories — well, I can guarantee you Brookhaven is already redirecting, and JLab lost a whole six weeks of operations funds this year. Where does it come from? So that's the problem they're having this year. So we can't tell you yes — that's the short of it — but we're working on it. I personally think many of us believe we should have this. I do think that the laboratories — and Martin can comment or not — can still always help at some level, but there are certain things we cannot do.
14:31:39 — Yeah, I think the more important thing was what you said earlier: I can just see the collectiveness of what might be considered R&D for IP6 and IP8.
14:31:54 So we have to just make sure that there is no prioritizing, right. We already saw this exclusivity clause that was tried to be imposed — okay, you cannot possibly support any of the other collaborations or consortia if you are doing this — and this might have a direct impact on what we're talking about here. So: is it okay for money that is not obviously tied to a particular effort? I just want to make sure that everybody is aware of it, and that there is some — I don't know, in the end Thomas is probably going to do this — but that there is some kind of appropriations board or something that says, in a neutral way, okay, that's useful, that's not useful.
14:32:41 — So in the end, the way we're trying to do it, if we are successful, is to use the Detector Advisory Committee the same way as the detector R&D panel — there will not be a new body. Now, the problem is, and I'm sure the one collaboration didn't think this through: at the moment you have this exclusivity clause — what does that mean if we give general funds to shared R&D? Does it mean you don't pass the information on? Or does it mean that if you come to my door for more or less some additional help, I can't help you? So it is a bit of a problem; I understand what you're saying.
14:33:21 — Also, I think it will not happen, Martin. Don't — that clause was ill-defined in the statements and was misunderstood, inside and outside. I think they had a different thing in mind; they did not think about this particular case, for example. So there are now evolutions of that clause and they're trying to clarify it; I think it will work out. So don't get hung up on that one anymore — for this particular thing, for sure not, and I'm hoping for the other ones too.
14:34:01 But let's turn the clock back a little bit. You had your actual first workshop — and I would have loved to be there more, I like this topic — so let me ask: what's the biggest hiccup, what is hindering progress? Okay, you can point to students — and sorry if I'm offending anyone, but that doesn't sound like a very high-level priority, and I'm all in favor of students, don't get me wrong. So what's the biggest hiccup? Tell me, the other way around.
14:34:32 — Oh, there are so many different ideas, so many different approaches; it's not converging. I don't know if that's actually the question you had, but each lab is doing their own thing.
14:34:49 — That was kind of — I mean, that catches the gist, but I think what we are seeing right now, in what we are talking about, is pretty much tied to particular already-existing experiments, right. Obviously I spoke about sPHENIX; we heard this morning about another experiment; and for JLab, David Abbott was talking about what's done for the Hall A, B, C, D experiments.
14:35:25 And it's good — a lot of the stuff that comes out of this will be usable for the EIC — but every single group, of course, has to continue supporting what they are supporting right now. They cannot just stop, and I cannot devote half my time to doing something else; it has to be kind of tied in with what we're already doing, and that's pretty much what we heard in a lot of the talks at this workshop. So I'm not sure if there's a way of carving out — pick a number, 20% — of the time of the usual suspects here to devote to this and worry about what will be needed. In a manner of speaking, we were already asked to do this for the proposals, due in December — so this time has to come from somewhere.
14:36:20 — Yeah, and I'm not even sure 20% is enough, right. — Well, I can put in any amount of Saturdays and Sundays; my standard answer is always, oh, I just cut my coffee break on Sunday afternoon short. — Yeah. — And if you look at faculty: I have to get money in, otherwise I will not get tenure, so I have to get a grant — so I'd better spend my time on projects where I can get a grant; that's what's worthwhile to do, right.
14:36:55 — And again, we're not solving that here. Even a continuation of the R&D program is not going to solve that. — Exactly. — I think in the end — and I say "we" because I was part of the initial group coming in, more or less, with this proposal — all we got was $20-30,000 so that we could have our annual workshops, so that we could organize. And yes, it's common sense that we should utilize the experts we have before we lose them — we all work together, we don't want to separate, we want to work for each other. So I think what we've got to find is a way to have enough funds to continue these workshops, so that we can coordinate and be smart. It's good if, say, something can be used and tested at sPHENIX; it's good if something can be tested at JLab. What's wrong with that? But I cannot solve your student problem, Jan — and that's not the topic; even if the R&D program went on, that might not solve it either. — No, no, sure; the question isn't whether the program can solve that issue, but I think it's a fundamental issue the EIC faces, not only for streaming readout.
14:38:03 — But look, for the students and postdocs, come on: we have our normal university grants, which are associated with various experiments. You're asking for student support from the DOE directly, and you are training those students in the data acquisition part; in this case the student will have to spend some time there. Currently there is no research grant for the EIC — that is a correct statement. There is the RHIC heavy-ion and spin program, and there is the medium-energy program, right — all the students in nuclear science coming into these fields are coming from these two or three grant lines. So that is where we need to go.
14:38:55 — So let me also be very blunt here. Fernando should know the answer already, but maybe he wanted me to say this: if there is a direct readout need at the moment for something basic,
14:39:07 and we really need that in our ambitions towards streaming, and we know what we need — then yes, there are routes.
14:39:20 — But that's the thing. Okay, I was just thinking of a concrete example. Let's say that David, Fernando, and I wanted to team up — just to pick one, say the company making this chip — and we want to get such a thing and spin it up a little bit. They know a little bit about it because of what happened in both the R&D program and all this stuff, and it looks like a promising thing. I actually can't think of a venue for, say, $9,000 or something like this, to just get a prototype — and then, let's call it about 15 grand or so, to be able to say a year from now: that looks promising. It's not going to be the last iteration and all that, but this is the kind of thing that we would probably get behind.
14:40:17 — Martin, if it's not more than $10-15,000 and you are in direct contact with David and Fernando, I am not sure why they are not knocking on my door, because to me this is what you normally do as laboratories. — Well, I don't actually run Brookhaven, but I think if David and Fernando — and Chris also, or Graham, whoever — if we really believe this is something we really need, we have to spend the money. — Yeah. This was just an example; I hadn't actually talked to them. — But this is partly why I brought up the project earlier: if there are possibilities, opportunities like that, I think we want to do it.
14:40:54 — Okay, well, I want to make a couple of comments with respect to Martin's statements earlier. One of the ways that I feel we can come together much more easily, as the two labs, as one, is if we have the ability to share some of our technology. There's custom electronics being developed and used at both labs, and one of the great things, to get people involved with something, is to have access to that electronics: I want to have a FELIX card and start using it here at JLab, get comfortable with it, and see the types of things that we can do with it. But we need the jump start; we need the ability to get a card. And if we've got a group or collaboration that wants to work on that, we need to be able to get these types of electronics to the groups that could potentially use them, bring feedback, and get both labs more comfortable with using each other's devices, what we can do with them, and how we might be able to integrate them. Then it'll be a lot easier, because even though there might be firmware that's developed that's different, or software that's different, you're using a common set of hardware. And this goes for commercial hardware too, things you can just buy from the companies that make them. The more we can interchange those things and use them, the easier it will be to move forward together.
14:43:02 — But wait — I thought we had a FELIX card from our colleagues? Okay.
Yeah, we do, but we haven't been able to use it, because the firmware is tied to ATLAS and I can't get access to the firmware and other things. So this is the problem: we have a FELIX card, and it's too hard to get it working because we don't have the connections needed with people in various locations. — Dave, I'd like to solve that problem. — Dave, I think Jo and you — we could solve that; those are solvable issues. It's just taken a little more time than I would have thought.
14:43:37 — To just interject quickly: I dealt with the vendors this time around, and it's interesting to hear their feedback. We all have the same experience, because we leverage our industry partners — name it: PCIe, IEEE, TCP/IP, all these things are industry standards — for detector development. We did the same for the 12 GeV upgrade: people were very focused on the detector development and on analyzing, completely checking that detector design — it could have been VME, it could have been anything. We didn't care about high rates; the focus was on the detector. We knew the electronics was already evolving — maybe not a giant leap, but a leapfrog from Virtex-4 to now Virtex-7 or UltraScale. So what I'm saying is, we're on a great track here; we have a lot of shared knowledge. We can open up the channels so we can literally just say: here you go, Martin, use it, here's my documentation, you give me yours, I'll give you mine. Or maybe we try to focus on, say, FELIX 2 — something that is going to come out in a year or two, not five years, not ten years — and this gives everybody a launching point at the front end for their detector designs, using version one of FELIX 2. These are just words off the top of my head. And then we could leverage industry partners — is PCIe going to be around, are the switches going to change to 400 gig? We know those things are going to change. So with all the time constraints, with people's projects and multitasking on different things, I think we could leverage the industry folks.
14:45:41 — Yeah. Now, sharing: I think there should be no objection to sharing. And again, speaking for the community — and not only because that's my style; ask Martin — these are all people who ran the R&D program more or less, and they want to get things moving, and so do my people. Okay, in reality I only have funds for JLab represented, so all I'm able to say is that it's a good idea and of course we'll try to do it. And I'm sure — I hope — it's the same at Brookhaven, and the same elsewhere. In the end, there have been some meetings with the ASIC companies, and it's just like that: if there's something useful for us, we're all for it, we're all behind it, then we should do it, and we should find ways how to do it. I always believe in the idea that you've got to go for what you believe in, and in the end we'll find the funds for it.
14:46:35 — Just a little warning: this all works great until the lab's
14:46:37 lawyers get a say in it — that's what basically crippled our Indico here at Brookhaven, because of exactly an issue like this. So, for example, we made a conscious decision for sPHENIX to have the entire reconstruction software and whatnot public on GitHub, so everybody can just get at it — it has a public license and everything with it. We cannot do this for some parts, but that's a teeny-tiny fraction — wherever something has proprietary IP with it you can't actually do this — but most of the stuff is there. And I can already see the troubles: there's always this friction between people — no, no, this is mine, you cannot just have this leak out — but this is really what science is about: in the end you share the stuff, you use the stuff, and you don't reinvent the wheel every week. We have to be forceful with the stuff that is coming, and if it extends to intellectual property, that should really not be an issue between national labs. We've got to make that happen. — Oh, I agree with that. Yeah.
14:48:10 — On the computing side we've just been going through a very similar issue. One of the things that we would love to do is make our scientific computing open to BNL users, and to have the BNL computing open to ours, in the context of the EIC. And as we started walking along that path, the cyber security folks came running and said, you know, the DOE has rules about foreign nationals and bogeymen and all kinds of other interesting things. So we've spent quite a lot of time over the last 12 months coordinating things like cyber security training, moving things into different sorts of training. So now you can do the JLab cyber security training without having to do the bit about what you will do if somebody comes in and tries to shoot you. So, to come back to Martin: I think what we need is, at some level, for these groups at the two labs to say, okay, for what we're collaborating on for the EIC, at least between JLab and Brookhaven, we should reach common ground on what these things look like.
14:49:20 — It should not be limited to these labs, right — if you really want the universities to take part, there must be a way, with low effort, to get access to this stuff. — Yeah. Yeah, I mean, the access is getting easier, I must say; I've seen that over time, and I think there's a significant effort, particularly related to the EIC. But I was surprised that your safety rules or security rules are still different at the two labs, and that might be something that the higher-ups in the management of both labs could probably try to solve together. — It is being solved. The issue was not that they were different; it was that neither side was recognizing the other's training. — I see. — So we had to do what Martin said: we show you ours, you try to use yours, and then we agree that we will be training the same thing. — Okay.
14:50:19 That exists a lot — and it's not something special to JLab and BNL. When I go to Fermilab I have to take their site training, even though 90% of it is the same: how often have I seen this active-shooter video and had to acknowledge that I've seen it — but I have to go to Fermilab and watch it there for it to count, and it's exactly the same thing. Normally what you would expect is that the 90% of the stuff that is common is recognized from the respective other lab's training, and only the site-specific part is covered separately. But I think we're getting there. Brian is right that it's going to take a while, but in the end, if there is a recognized mission need for this, it normally happens; it may have some teething pains, but I think it's actually going to be okay. Stuff like radiation training, though — why is it not recognized by the respective other lab; why do I have to listen to what a neutron is once more when I go to some other lab?
14:51:34 — Okay, let me make it clear: if there's any issue where things more or less cannot be shared, at least from the JLab side, let me know, because I'm not aware of it, and I'll shake the tree. We should be able to share at least within the EIC project, come on.
14:51:50 — Yeah, well, I'm only a lowly DAQ manager, I cannot actually promise the same thing, but as I mentioned, we might be able to work something out, at least with the stuff that we have made ourselves — not the ATLAS part, but what we have made for the TPC FELIX card. The ATLAS part is the reason why this couldn't be shared, so that's a particular issue.
14:52:17 — Okay, but then again, globally, let's think about it this way: if we need to go for streaming, we can't go in 15 different directions. Overall we have to have a clear plan of where we want to go, and in the end where we want to go is to have something working for the EIC. And if there are direct spin-offs for experiments at Brookhaven or JLab or elsewhere, or for the users and universities, all fine by me. But let's see how we can get there.
14:52:40 — Yeah, I think the direction of streaming readout really depends a lot on how the direction for the detector goes. If you have different types of detectors, they will have different types of streaming readout. Just from the data volume and so on, one detector might not need complicated online reconstruction to get the data rate down; the other one might need it. And that's a different design in the end: the components could be the same, but the overall design will be quite different.
14:53:13 — That's generally what we do — for example CLAS12, GlueX, and Halls A and C: the component parts we kept the same, but how you slap them together to make a DAQ system, and make it compatible and scalable so you can grow it — that's a lot of what Vardan was talking about in his presentation. You decompose a complicated problem into a lot of small ones, and then over the years you change the little pieces, and in the end you end up with a completely different boat.
14:53:52 All right, maybe we go to the next question.
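As an editorial aside before the next question: a minimal sketch of the "decompose it into small, replaceable pieces" idea described just above. The stage names and the pipeline wiring are illustrative assumptions, not the actual ERSAP or CODA interfaces; the point is only that each stage sees data in and data out, so individual pieces can be swapped over time without touching the others.

```python
# Minimal sketch of a stream-processing pipeline built from small, swappable stages.
from typing import Callable, Iterable, List

Stage = Callable[[dict], dict]   # a stage transforms one data quantum (here: a dict)

def make_pipeline(stages: List[Stage]) -> Callable[[Iterable[dict]], Iterable[dict]]:
    """Compose independent stages into one stream processor."""
    def run(stream: Iterable[dict]) -> Iterable[dict]:
        for quantum in stream:
            for stage in stages:
                quantum = stage(quantum)
            yield quantum
    return run

# Example stages: each is a black box that could be replaced by a new implementation.
def unpack(q):     return {**q, "samples": list(q["raw"])}
def calibrate(q):  return {**q, "samples": [s - 1.0 for s in q["samples"]]}  # toy pedestal
def suppress(q):   return {**q, "samples": [s for s in q["samples"] if s > 0.5]}

pipeline = make_pipeline([unpack, calibrate, suppress])
for out in pipeline([{"raw": (0.2, 1.7, 3.0)}, {"raw": (0.9, 0.1)}]):
    print(out["samples"])
```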
14:53:56 So: what points do you see as critical — what capabilities should the streaming-readout community demonstrate? Focus areas like timing and clock distribution, interfaces, requirement definitions that are yet to be determined for the systems?
14:54:07 — Yeah. I was an LHCC referee during the time they developed their streaming readout, and I saw electronics coordinators, who had to produce a ton of electronics, literally being confronted with ever-changing requirements. Our own example is the SAMPA for the TPC, with all its digital logic to do zero suppression: it turned out at the last minute that there were not enough channels on one SAMPA to define a good baseline. So, just before everything was settled, they decided to send all the data up from the pit to the counting room, where the data from several SAMPAs could be combined, and only then can the data be zero-suppressed. Needless to say, this meant a complete overhaul of everything. These are the kinds of things that I think are very crucial.
14:54:48 — I think that actually goes back to the point we just had: if you make it modular and scalable, we should be able to react to these changes.
14:55:00 — Yeah. But clearly, we heard all about the data in this particular scenario: they were counting on the SAMPA DSP to thin out the data stream, and then the links would have been enough. And after they saw that the set of channels isn't a big enough coverage to do this, they had to send everything up, and they actually had to drop the sampling rate — which is a bigger thing than just moving a particular functionality from one place to the other: I think they went to a much lower sampling rate in order to accommodate this. But either way, this is why prototyping is crucial — you cannot really find this out without actually trying it; you can calculate and simulate as much as you want and never find every little pitfall. So yeah, I think we touched on this one already: we get some early versions of a particular chip or something like this, bang on it, and see what happens.
14:56:14 — Yeah, I need to leave for another meeting; I didn't want to just drop out. You can continue, and I'll catch up with you and Rolf later. — All right, thanks for being here. — Thanks.
14:56:29 Yeah, the goal of that question was really to see if there's anything critical from the project side that they worry about. From the reviews we always had the impression it's all about whether we should do it at all, and that there's too much data — and I wanted to see if there are concerns we should address about whether it's feasible at all: can we actually get the signals out, or whatever they come up with — to see if there's anything to worry about from their side and what we should concentrate on.
14:57:04 — Coming back to my point: the thing that gives me a little bit of pause here is that we have this little fishbowl called sPHENIX where we are actually going to try all this stuff, and we get some physics out of it, too.
So I think this will actually be a pretty useful thing. I mean, at the moment I don't really see any showstopper, why it wouldn't work — I mean, I have been reading it out, I have passed a couple of reviews with this 14:57:35 stuff. But, you know, yeah, I think that's true for an sPHENIX-type detector, right — coming back to my first point. Yes, exactly. So, but, I mean, I'm not going to add anything new; there are some things which are critical, 14:57:54 but people have already talked about them. 14:57:56 One thing is, for sure, the timing and clock, mainly. 14:58:10 And my people — "my people" meaning people at Brookhaven or JLab — are looking at that. 14:58:15 So we know that. The other things we talked about earlier: 14:58:20 one is to indeed be able to decompose the system and put it together — Lego play, I always call it. And the other point that is critical is the detectors. 14:58:30 I mean, in the end what we're doing here is making the detector, the electronics, and the computing, and intertwining those — that's what we're doing. 14:58:37 So this all goes together, and if there are implications for certain detectors, for certain physics, we have to know these things; that's critical. Everything has to stay in sync, nothing can get out of line. My worry is that some of the detector people don't 14:58:51 see these as so intertwined. 14:58:55 Well, this is why I think it's going to be extremely important that we essentially have this discussion among the various proto-collaborations. 14:59:06 So one comment I have is with respect to the current design, or what we have sort of set out for the EIC detector: we have the front-end electronics, 14:59:24 but we haven't really defined to what degree they are going to be aggregated — to the point where, if you wanted to do something like this, if you needed to get, you know, enough of a detector's 14:59:38 streams together to do something like, for instance, the machine learning that Sergey Furletov talked about in the workshop — yeah — 15:00:00 on an FPGA you could do some sort of processing ahead of time to reduce the data, or to do something like zero suppression. 15:00:02 That has to be considered, because that's — 15:00:07 we have to help facilitate that, if we're not just going to bring, you know, as many of the raw streams as we can to the counting house, into FELIX cards and into the first stage of servers there. 15:00:20 If we need something in between for front-end processing, 15:00:26 we need to know that, and we need to provide, you know, the hardware and be able to support it; that has a cost impact if it's going to have to be at the front end. 15:00:42 Yeah, just — I think you don't always need to bring the raw streams up to the counting room. 15:00:59 You know, if you can do it, and you can deal with the amount of data that's being generated, then that's easy, right — then you don't worry about it. 15:01:01 The question is, do we need to worry about it; do we need to do something before then, or not. 15:01:08 I always thought that there would be some level where you can have a sample of the raw streams, so that you can test your algorithms. The question is going in a different direction, but I think the question is — I mean, in the 15:01:24 end you want to save what comes from the experiment, which is what's important, right.
So, do you do zero suppression like in the TPC, where, if that is sufficient, you copy only what is above threshold and do not copy the zeros plus noise — or can you not do it that way? 15:01:42 For example, I could imagine the calorimetry, where you want all channels, even if they are not above threshold, because the sum of them carries the information. 15:01:54 Right. So you would need to say: okay, if there was something in some channel which was big enough, then I read out everything else — which means that you have to have visibility of more than your channel, or five channels; you have to have visibility of a 15:02:10 large block of channels to be able to decide whether to keep it, and that might be further up the food chain, right, in the stream. 15:02:19 And that might need a lot of computing power, 15:02:24 and a lot of bandwidth between that computing power and the front end to get the data over, and only then do you decide on the zero suppression. Right — and it is detector by detector which zero suppression you want to talk about. 15:02:38 And I think this is absolutely correct. It is very dependent on what the detector is, what the physics is, and what the background is like, to make that decision. So that's not something where we can look at a detector and say from far away, 15:02:52 oh yeah, that would be fine — it might not be fine. And we will need a lot of simulation and hard thinking to decide, for each detector, 15:03:04 where and when the zero suppression will happen, 15:03:10 and how. 15:03:13 And hopefully for most of the channels it will be: oh yeah, it's below threshold, 15:03:18 don't say anything on the front-end chip. Right. But some might not be like that. [A toy sketch of these two suppression options follows this block.] 15:03:35 Thanks for getting deep into the detailed issues — you just summarized a big discussion that we have every year. 15:03:38 Okay. Yeah, and I must say, I'm actually feeling good about this — in particular about what was said, right: okay, let's work together, 15:03:49 let's get obstacles removed. 15:03:54 The other thing I might ask, in that sense: 15:03:59 in the past we had a lot of headwinds, where people were complaining that we should trigger and not stream and all of this. 15:04:08 How much is that still true from the project side? How many people are still not convinced that streaming readout is the right thing? 15:04:21 From the project? Yeah — that's really the question you want to ask, because the question is easy, the answer is not. 15:04:30 Addressing the project, right — how many people come to you and say, hey, why are we doing this streaming thing when we can do it the old way? 15:04:43 You just have to realize that there are a lot of people — you're hitting people who have worked for 30 years with a certain system: they know it works, they know it's debugged. 15:04:46 So the burden of proof is always on us to show that it can be done, and I think that we can do better. My view is we sometimes have to make it easier for people to see the arguments we have — this is nothing new. 15:05:02 So there are things we can do better, like this issue about storage — I mean, this is crazy; why the hell was that not clear? 15:05:10 And they give you grief — or John, in your presentation — when we had the answer already; one of us should have addressed this beforehand. So I think we can do better there, all of us.
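[Editorial aside.] The exchange above contrasts the two zero-suppression strategies only in words, so here is a minimal, purely illustrative Python sketch of the two options: a per-channel threshold versus keeping a whole block of neighbouring channels whenever any one of them fires, as one might want for a calorimeter. All function names, channel counts, block sizes, and thresholds below are invented for illustration and are not taken from any experiment's actual readout code.

from typing import Dict, List

def suppress_per_channel(samples: Dict[int, List[int]], threshold: int) -> Dict[int, List[int]]:
    # Per-channel style: keep a channel only if one of its samples exceeds the threshold.
    return {ch: s for ch, s in samples.items() if max(s) > threshold}

def suppress_per_block(samples: Dict[int, List[int]], threshold: int, block_size: int) -> Dict[int, List[int]]:
    # Calorimeter style: if any channel in a block of neighbours fires, keep the whole
    # block, because the sum over neighbouring channels carries information.
    kept: Dict[int, List[int]] = {}
    channels = sorted(samples)
    for start in range(0, len(channels), block_size):
        block = channels[start:start + block_size]
        if any(max(samples[ch]) > threshold for ch in block):
            for ch in block:
                kept[ch] = samples[ch]   # below-threshold neighbours are kept too
    return kept

# Toy data: 16 channels of noise, one hit on channel 5.
data = {ch: [2, 3, 1] for ch in range(16)}
data[5] = [2, 40, 3]

print(len(suppress_per_channel(data, threshold=10)))              # -> 1 channel kept
print(len(suppress_per_block(data, threshold=10, block_size=4)))  # -> 4 channels kept

In a real system the block-level decision would typically have to sit further downstream, where enough channels are visible at once, which is exactly the computing-power and bandwidth trade-off raised in the discussion above.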
15:05:18 This is not criticism of you; this is on all of us. 15:05:21 We have to make it clear: if a certain question is raised, normally we have the answer — have it documented, write it down, point to it. 15:05:29 Personally, where I still see the hiccup with some of the things in streaming is this: if you compare, say, with our competitors at LHCb, they always have the J/psi as a candle; we don't have that, so we still have work to do. 15:05:46 So, I think that will get sorted out as we go along. 15:05:49 But my impression — and you heard that a little bit when we started the Yellow Report — is that nobody spoke against streaming. 15:05:59 So I think it's changing, but you will still always get — look, people are always nervous; they see: there go their 15:06:08 30 years of experience of how to do experiments. So we just probably have to do a better job, more or less. 15:06:16 Yeah, on how we convinced ourselves, let me add something there. LHCb is a good example, right — I'm very familiar with LHCb — and they drive a very aggressive form of streaming readout, where they really reduce to physics and only write 15:06:33 the high-level physics objects out. 15:06:37 And that's probably not where we will end up. 15:07:00 Right — they throw everything else away, which is absolutely scary, and you need these candles and a very good understanding of your detector to make sure that this is going right. 15:06:50 Right. And I can understand the panic — that people panic if they see this; I would too. 15:06:57 And I always try to make the point that we are not at that end of streaming readout; we are really at the other end, where we can keep everything, or almost everything. 15:07:07 We can probably keep more than with a triggered readout. So we will have fewer problems like that, not more. 15:07:13 Right. 15:07:14 But maybe I'm not doing a good job at this. 15:07:30 But remember, for the Yellow Report we tried to find anybody — get anybody to stand up and defend triggered readout. I mean, if you read the report on the detector, I think we would have found somebody in there. 15:07:46 No, but I don't actually — I don't really share your concern about the management in this particular regard. I think in the end this is a little bit, you know, below the pay grade, I would say, for many people, right — it's: okay, yeah, 15:07:54 guys, make the readout, make it so, you guys are the experts. I think there's some notion like that. So, well, at least we have said here how we would like it; at least, I think, for everyone that's 15:08:11 working on the electronics it's a foregone conclusion that it's going to be a streaming system. 15:08:18 Yeah, and from the project side it goes the other way around: I'm sure that Fernando and the others will go on tilt if it's not streaming; the cost question will come up. 15:08:28 So we had better make it work. That's my view. 15:08:31 So, yes — no, just to emphasize one of those points: I think one of the useful things is to have the answers, but also for people to know who to communicate with to get those answers. 15:08:48 An example that I had from JLab a couple of years ago was somebody who came up to me and asked, how fast are your tape drives? 15:08:58 And in the end they didn't come to me; they went to somebody else.
And when I heard about it, I asked the guys: why do you need to know how fast the tape drives are? Well, we're proposing an experiment that is going to go on the floor in five years' time, and we want to know whether 15:09:10 you're going to be able to store all the data. 15:09:13 I said, the question you should be asking is: Graham, can you store the data? To which the answer is yes — 15:09:20 not, how fast are the tape drives. And I think there's a lot of this going on, where people are asking for a random piece of information and extrapolating, when all the time we've already done the extrapolation and we know what the answer is. [A toy version of that rate-to-storage extrapolation follows this block.] 15:09:34 Yeah, and the only relevant question in all the reviews we had last year indeed had to do with this: they were worried about the impact on the storage and the computing. 15:09:45 And then we had to dig up our old document from six months ago, remind ourselves where it was, and post it for the review committee. So we should play devil's advocate ourselves, write it up, and make clear where people can find the answers. 15:10:01 Then, I think in a similar vein, was my first question: do you see storage cost as a major issue? 15:10:08 In my opinion, these costs would appear in any case; at least from our predictions, the data rates to storage are the same for triggered and streaming. 15:10:15 But it sets the working point, right: you can reduce it if you're more aggressive, and you can blow it up if you are less aggressive, in what you do in the zero suppression. 15:10:31 Same as thresholds in a triggered system. Storage cost is an operational issue for RHIC — it is quite a bit — but it is in line with expectations. 15:10:37 So that was the stance I took, and I tried to convey in all the talks that it's less than sPHENIX — and if sPHENIX can pay for it, yes, you should be able to pay for it too. 15:10:49 Right. 15:10:52 Yeah, and that's actually one of the arguments that Jérôme used about the computing: the storage and all the disk and everything else is about similar to, or less than, sPHENIX. 15:11:04 And since you will already have done sPHENIX, all it really needs is a refresh. 15:11:10 Yeah, you don't need to redo the whole thing; you just buy new computers and a few new tape drives, but the infrastructure is all the same. 15:11:18 Yeah, the scale of the problem is the same. Right, yeah — that's 15:11:25 assuming you do sPHENIX. But yeah, I think we at least have figured out that we can do sPHENIX data rates — I think that's validated, or maybe not really validated, but refereed. 15:11:42 And people don't have a concern about whether this will run. 15:11:46 Right. 15:11:48 So again, I mean, I agree — the reviewers didn't know; you just have to make sure that they know what the storage cost is going to be, what the tape cost is, what it all is — we know all of this. 15:11:57 And for us to think about — turn the clock around a bit again; people have to fold this in. I'm always a glass-half-full kind of guy. 15:12:05 Five years ago, when we started all these workshops, we would get the question: why do you go to streaming? And we had to argue that there are certain channels we otherwise cannot even access in certain experiments. 15:12:16 I would bet you that, a couple of years from now at least, if you came in for a new experiment the way you did eight to ten years ago, with a triggered readout, you would get more questions.
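[Editorial aside.] As a purely illustrative version of the "we have already done the extrapolation" point above: once the assumptions are fixed, the rate-to-storage estimate is a one-liner. Every number below is an invented placeholder — not an official sPHENIX or EIC figure — and the only point is that the answer is driven by the average logging rate, i.e. by how aggressive the zero suppression and filtering are.

# Toy rate-to-tape extrapolation; all inputs are assumed placeholders.
def yearly_volume_pb(avg_rate_gb_per_s: float, weeks: float, livetime: float) -> float:
    # Average logging rate (GB/s) times live seconds per year, returned in petabytes.
    live_seconds = weeks * 7 * 24 * 3600 * livetime
    return avg_rate_gb_per_s * live_seconds / 1e6   # 1 PB = 1e6 GB

# Example: an assumed 10 GB/s average to tape, 30 running weeks, 60% livetime.
print(f"{yearly_volume_pb(10.0, 30, 0.6):.0f} PB/year")   # ~109 PB/year

Whether a number like that is comfortable is then a cost and operations question — which is exactly the comparison to sPHENIX made in the discussion above.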
15:12:29 The field is changing, and if you look at all the advances in electronics and microelectronics, in ASICs and so on — 15:12:41 and what is it, AI, 15:12:43 advanced computing — 15:12:45 I personally think, if we are thinking about an EIC ten to fifteen years from now, any review a few years from now will question us if we come in with a triggered readout. My view. 15:12:50 So think about that compared to five years ago — a huge difference. These things keep changing. Keep working, is what I would say. 15:13:02 Keep plugging away and keep sharing the information that we have. We are in incredible shape: we have all these test setups, we have sPHENIX, we also have JLab, 15:13:16 we have other experts around the world, we have our international collaborators working — what's not to like? Yeah, the prototyping and some of the experiments are a little bit expensive, but still. 15:13:28 Next time we'll enlist you as a motivational speaker. So, yes — 15:13:36 I can do that, if that's what you need. 15:13:36 I think that's the last question I had; I would just add to that: 15:13:43 we should all go out and really talk to the proto-collaborations, if you're not already doing that, and make sure that they are aware of this and have someone to talk to if they have questions, and 15:13:56 work with them. I would add that if you look at my own house, Jefferson Lab, we have the same issue: in the future, to some extent, we're going to rely on this. 15:14:06 Don't kid yourself — it's the same. So we all have to work on all of this together. 15:14:15 It's not just the EIC; it would be needed anyway. Yeah, that's true. That's true. 15:14:21 And it's just an evolution of what we had before, right, if you look at it. 15:14:26 It's the logical conclusion of a development over the last 20 years, of how triggered readout evolved. 15:14:39 And if you look at our colleagues — let's take, say, the LHC with the high-luminosity upgrade, at least what they have in mind — of course they're going to push in this direction. Look at our colleagues elsewhere: streaming is a piece of their readout, in the end, to get the best science. So 15:14:52 we're not alone here. 15:15:06 All right. Any more questions, or any more discussion? 15:15:15 And — we didn't settle anything here, by the way — some of these questions 15:15:21 helped us a lot to understand where the project stands. So it was very good that you were here, and I think we all learned a lot and understand the situation better. 15:15:33 Thank you again. That was really, really helpful. 15:15:40 Maybe that brings us to any other business. 15:15:45 Any other business we have? 15:15:48 We should certainly have the next meeting in half a year. 15:15:51 Yes. 15:15:56 So, 15:15:56 the question is where and when. Hawaii, in half a year? 15:16:07 That — 15:16:07 no, we can't ask for money for that. Yeah. 15:16:15 That's the catch, isn't it. 15:16:15 Well, we can always organize the venue — it is not that expensive, normally — and people would only have to get their travel from their own institutions, if we're allowed to travel. 15:16:35 Yeah, no, that's — yeah, we still have a lot of funding, actually, so we 15:16:31 could do some things. But no — I mean, 15:16:37 I guess the next in line would be JLab again, 15:16:41 right?
15:16:42 We could do that — but if somebody else is interested, I'm happy to support that too. 15:16:53 Well, okay, maybe we don't need to decide here. 15:17:00 But for the next streaming readout session, maybe somebody else wants to host it, and if it is at JLab, 15:17:12 you know, maybe it doesn't have to be at Christopher Newport — it can be elsewhere in Virginia or somewhere nearby. 15:17:25 Also, I think — I mean, not to complain — but, you know, we have been rotating this thing around, 15:17:34 and maybe other places are interested at some point; I don't think we should have this kind of exclusivity clause here, right. 15:17:48 Yes, definitely. No, I think that's the call, right: everybody, think about whether you want to organize it, then talk to us, and if we have more than one offer we can make a decision, and maybe ask around or something. 15:18:05 It doesn't have to be a lab either, right — it could be any university. So I don't know, Chris, if you're interested we would come down to Kentucky; I haven't been there yet. 15:18:17 That'd be wonderful. I mean, in six months — whether we'll be traveling by then... One point I would make, just extending it one more iteration: right around this time next year is the CHEP conference in Norfolk, and I don't know whether we want 15:18:35 to somehow tie the workshop to people potentially coming to that conference. 15:18:43 That is actually a good idea; I think that would be really nice. Yeah. Yeah. 15:18:54 Yeah, I'm telling you what the best venue is: that's Hawaii. That's what I think. 15:19:01 But when is the DNP? Because the DNP is in Boston 15:19:07 around then. 15:19:09 I haven't checked yet. Okay, and then NSS-MIC? 15:19:16 NSS-MIC is virtual this fall. Yes, everything is virtual — end of September, something like that. 15:19:26 Normally it's the Halloween conference, but I see it's actually a little bit earlier than I thought. Okay. 15:19:34 Well, we're hearing that the DNP meeting is going to be hybrid, partially in person and partially virtual. It's about October 11 to 14 — I just found it. 15:19:47 That's in October. Yeah, yeah, okay. So, end of October would work, 15:19:56 or — NSS-MIC is the 16th to the 23rd of October. 15:20:00 Okay. 15:20:03 So we could do it after the 23rd then. Now, have it in Boston? 15:20:10 Let's not put it at MIT again — oh, sorry, you're talking about the other meeting. 15:20:18 Yeah, yeah. So it would be — 15:20:22 I don't think it should be Boston again anyway, because we just had one organized by MIT. Also keep in mind that a good number of us here in the US will be involved one way or another with the December deadline for the proposals. 15:20:36 Yeah, yeah — you know, maybe we can actually push it a little bit later, so that we are not really interfering with the DNP and NSS-MIC; I think one half of us goes to one and the other half to the other, all these people are 15:21:04 in some of them — and then, you know, you have this mad scramble to finish whatever proposal; it will be the equivalent of what happened with the Yellow Report. 15:21:04 Right. And so, I mean, I will not object to having it after December, like the second week in December — though I think we will not be traveling yet. 15:21:15 I mean, I cannot see that this is going to be any kind of physical travel.
15:21:23 I would agree with Martin — I don't know; we'll just have to see whether we can travel and do all those things. But maybe also, if we skip it or move it out further, there could be a report at CHEP, or a report, 15:21:38 you know, at NSS from the consortium — somebody who puts together a talk so that people know about it; it would be nice to get a wider audience at those conferences. Just on CHEP — Dave, what's the deadline for submission? 15:22:00 I think submissions are starting in August, and they're probably going to close sometime mid — 15:22:07 I don't know when; I'd have to go back and look. I'm one of the organizers: submissions open in August, and then we make a decision just before Christmas on which papers are accepted. 15:22:23 And then the conference itself is mid-May next year. 15:22:28 Now, as I was saying, that drives it in the opposite direction: you probably want to have a meeting before, because then you can discuss what you're going to submit, if you want to make it more concentrated — 15:22:43 or we just move to a January meeting. 15:22:49 That's fine — that would fit me better, because I will be in Europe toward the end of the year; 15:22:57 by then I should generally be back. 15:23:04 And if we can agree on moving it out a little bit — I mean, I can see this turning into a mad scramble, you know, before December. 15:23:14 I agree, I agree. Or, looking at it a bit earlier, we could say, okay, let's meet, I don't know, in early September, 15:23:22 right? 15:23:25 But that's maybe not enough time to show progress. The week of December 6 looks pretty open to me. 15:23:36 Yeah, I might be online then. 15:23:40 Yep. 15:23:41 Okay, well, okay — so if we generally agree that we want to move it out — I mean, you know, I don't care if it's December or January or something, but something like that — 15:23:50 I think that would sort of open the floor to people looking at their calendars and seeing if it fits into their institutional plans, right. And the other thing was Hawaii — I mean, when it's 15:24:08 really safe to travel, then I would just lean on people like Gary or somebody like that to host it somewhere that we really all want to go to. 15:24:18 And when is the next joint meeting of APS and JPS — is that next year? No, it's in '23, isn't it? 15:24:27 Next year the meeting is in New Orleans, 15:24:33 and then it's Japan. 15:24:35 Okay. 15:24:39 That's, I think, five years from now — it's every four years, isn't it? 15:24:42 Well, then it should be '22, because it was '18. Yeah, but because of COVID they — Japan — agreed and just shifted it, okay. It was supposed to be New Orleans this year. 15:24:55 So it's normally every four years, but this time five years. 15:24:59 All right, makes sense. 15:25:02 Yeah, yeah — no, then I think that's the plan. So, again, everybody: talk to your institute, see if you want to organize it, tell us if you are willing, and then we can figure it out. 15:25:19 Before we adjourn — do we have any other business? Any other, other business? 15:25:31 Well, look, I want to thank all of the speakers; I think this was again a very successful meeting. 15:25:37 Interesting stuff. 15:25:40 I really hope the next meeting will be in person. 15:25:43 I'm getting really sick 15:25:47 of looking at my house.
15:25:50 So, normally we would have a meeting of the streaming readout consortium, or whatever, on May 3. I move we cancel that. 15:26:05 Yeah, I agree. 15:26:09 So the next meeting will be at the beginning of June, which is the seventh. 15:26:23 Sounds like a plan. Okay. 15:26:28 Any last comments, wishes, horror stories?