<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Niemeyer Research Group</title>
<link href="https://niemeyer-research-group.github.io//" rel="self"/>
<link href="https://niemeyer-research-group.github.io/"/>
<updated>2020-06-17T13:51:24+00:00</updated>
<id>https://niemeyer-research-group.github.io/</id>
<author>
<name>Kyle E. Niemeyer</name>
<email>kyle.niemeyer@oregonstate.edu</email>
</author>
<entry>
<title>Applying the swept rule for solving explicit partial differential equations on heterogeneous computing systems</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/swept-heterogeneous"/>
<updated>2020-05-30T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/swept-heterogeneous</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Applications that exploit the architectural details of high-performance computing (HPC) systems have become increasingly valuable in academia and industry over the past two decades. The most important hardware development of the last decade in HPC has been the General Purpose Graphics Processing Unit (GPGPU), a class of massively parallel devices that now contributes the majority of computational power in the top 500 supercomputers. As these systems grow, small costs such as latency—due to the fixed cost of memory accesses and communication—accumulate in a large simulation and become a significant barrier to performance. The swept time-space decomposition rule is a communication-avoiding technique for time-stepping stencil update formulas that attempts to reduce latency costs. This work extends the swept rule by targeting heterogeneous CPU/GPU architectures representing current and future HPC systems. We compare our approach to a naive decomposition scheme with two test equations using an MPI+CUDA pattern on 40 processes over two nodes containing one GPU. The swept rule produces a speedup factor of 1.9 to 23 for the heat equation and 1.1 to 2.0 for the Euler equations, using the same processors and work distribution, and with the best possible configurations. These results show the potential effectiveness of the swept rule for different equations and numerical schemes on massively parallel computing systems that incur substantial latency costs.</p>
</content>
</entry>
<entry>
<title>A fast, low-memory, and stable algorithm for implementing multicomponent transport in direct numerical simulations</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/multicomponent-diffusion-method"/>
<updated>2020-01-08T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/multicomponent-diffusion-method</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Implementing multicomponent diffusion models in reacting-flow simulations is computationally expensive due to the challenges involved in calculating diffusion coefficients. Instead, mixture-averaged diffusion treatments are typically used to avoid these costs. However, to our knowledge, the accuracy and appropriateness of the mixture-averaged diffusion models have not been verified for three-dimensional turbulent premixed flames. In this study we propose a fast, efficient, low-memory algorithm and use it to evaluate the role of multicomponent mass diffusion in reacting-flow simulations. Direct numerical simulation of these flames is performed by implementing the Stefan–Maxwell equations in NGA. A semi-implicit algorithm decreases the computational expense of inverting the full multicomponent ordinary diffusion array while maintaining accuracy and fidelity. We first verify the method by performing one-dimensional simulations of premixed hydrogen flames and compare with matching cases in Cantera. We demonstrate the algorithm to be stable, and its performance scales approximately with the number of species squared. Then, as an initial study of multicomponent diffusion, we simulate premixed, three-dimensional turbulent hydrogen flames, neglecting secondary Soret and Dufour effects. Simulation conditions are carefully selected to match previously published results and ensure valid comparison. Our results show that using the mixture-averaged diffusion assumption leads to a 15% under-prediction of the normalized turbulent flame speed for a premixed hydrogen-air flame. This difference in the turbulent flame speed motivates further study into using the mixture-averaged diffusion assumption for DNS of moderate-to-high Karlovitz number flames.</p>
</content>
</entry>
<entry>
<title>AJ successfully defended his PhD</title>
<link href="https://niemeyer-research-group.github.io//news/af-defended"/>
<updated>2019-12-16T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//news/af-defended</id>
<content type="html"><p><a href="/team/aj-fillo">AJ</a> successfully defended his PhD dissertation—congratulations Dr. Fillo!</p>
</content>
</entry>
<entry>
<title>pyMARS: automatically reducing chemical kinetic models in Python</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/pymars-joss"/>
<updated>2019-09-08T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/pymars-joss</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Python-based (chemical kinetic) Model Automatic Reduction Software (pyMARS) implements multiple techniques for reducing the size and complexity of detailed chemical kinetic models.</p>
</content>
</entry>
<entry>
<title>Phil featured in Honors Link</title>
<link href="https://niemeyer-research-group.github.io//news/phil-featured"/>
<updated>2019-07-12T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//news/phil-featured</id>
<content type="html"><p><a href="/team/phillip-mestas">Phil</a> was <a href="http://blogs.oregonstate.edu/honorslink/2019/06/27/developing-a-career-honors-student-works-second-internship-with-google/">featured in the Oregon State Honors Link</a>
about his second internship with Google!</p>
</content>
</entry>
<entry>
<title>Reduced Gas-Phase Kinetic Models for Burning of Douglas Fir</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/douglas-fir-reduced-model"/>
<updated>2019-07-09T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/douglas-fir-reduced-model</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>New skeletal chemical kinetic models have been obtained by reducing a detailed model for the gas-phase combustion of Douglas Fir pyrolysis products. The skeletal models are intended to reduce the cost of high-resolution wildland fire simulations, without substantially affecting accuracy. The reduction begins from a 137 species, 4,533 reaction detailed model for combustion of gas-phase biomass pyrolysis products, and is performed using the directed relation graph with error propagation and sensitivity analysis method, followed by further reaction elimination. The reduction process tracks errors in the ignition delay time and peak temperature for combustion of gas-phase products resulting from the pyrolysis of Douglas Fir. Three skeletal models are produced as a result of this process, corresponding to a larger 71 species, 1,179 reaction model with 1% error in ignition delay time compared to the detailed model, an intermediate 54 species, 637 reaction model with 24% error, and a smaller 54 species, 204 reaction model with 80% error. Using the skeletal models, peak temperature, volumetric heat release rate, premixed laminar flame speed, and diffusion flame extinction temperatures are compared with the detailed model, revealing an average maximum error in these metrics across all conditions considered of less than 1% for the larger skeletal model, 10% for the intermediate model, and 24% for the smaller model. All three skeletal models are thus sufficiently accurate and computationally efficient for implementation in high-resolution wildland fire simulations, where other model errors and parametric uncertainties are likely to be greater than the errors introduced by the reduced kinetic models presented here.</p>
</content>
</entry>
<entry>
<title>The community atmospheric chemistry box model CAABA/MECCA-4.0</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/CAABA-MECCA"/>
<updated>2019-03-05T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/CAABA-MECCA</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>We present version 4.0 of the atmospheric chemistry box model CAABA/MECCA that now includes a number of new features: (i) skeletal mechanism reduction, (ii) the Mainz Organic Mechanism (MOM) chemical mechanism for volatile organic compounds, (iii) an option to include reactions from the Master Chemical Mechanism (MCM) and other chemical mechanisms, (iv) updated isotope tagging, and (v) improved and new photolysis modules (JVAL, RADJIMT, DISSOC). Further, when MECCA is connected to a global model, the new feature of coexisting multiple chemistry mechanisms (PolyMECCA/CHEMGLUE) can be used. Additional changes have been implemented to make the code more user-friendly and to facilitate the analysis of the model results. Like earlier versions, CAABA/MECCA-4.0 is a community model published under the GNU General Public License.</p>
</content>
</entry>
<entry>
<title>Predicting fuel low-temperature combustion performance using Fourier-transform infrared absorption spectra of neat hydrocarbons</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/FTIR-LTC"/>
<updated>2019-01-17T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/FTIR-LTC</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>This work uses support vector machine regression to correlate infrared absorption spectra to a metric representing low-temperature combustion (LTC) engine performance, the LTC index: a single value encapsulating achievable engine loads, combustion phasing, and efficiency. A total of 313 fuels informed the model, including mixtures and surrogate gasoline fuels containing n-heptane, isooctane (i.e., 2,2,4-trimethylpentane), toluene, ethanol, methylcyclohexane, xylene(s), 2-methylbutane, and 2-methylhexane. We predicted LTC indices of the FACE (Fuels for Advanced Combustion Engines) gasolines A–J within ±6.0 units. The proposed methodology can be used both to predict gasoline LTC performance and to identify important hydrocarbon components that most improve (or reduce) LTC engine performance.</p>
</content>
</entry>
<entry>
<title>The principles of tomorrow's university</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/tomorrows-university"/>
<updated>2019-01-03T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/tomorrows-university</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>In the 21st Century, research is increasingly data- and computation-driven. Researchers, funders, and the larger community today emphasize the traits of openness and reproducibility. In March 2017, 13 mostly early-career research leaders who are building their careers around these traits came together with ten university leaders (presidents, vice presidents, and vice provosts), representatives from four funding agencies, and eleven organizers and other stakeholders in an NIH- and NSF-funded one-day, invitation-only workshop titled “Imagining Tomorrow’s University.” Workshop attendees were charged with launching a new dialog around open research – the current status, opportunities for advancement, and challenges that limit sharing.</p>
<p>The workshop examined how the internet-enabled research world has changed, and how universities need to change to adapt commensurately, aiming to understand how universities can and should make themselves competitive and attract the best students, staff, and faculty in this new world. During the workshop, the participants re-imagined scholarship, education, and institutions for an open, networked era, to uncover new opportunities for universities to create value and serve society. They expressed the results of these deliberations as a set of 22 principles of tomorrow’s university across six areas: credit and attribution, communities, outreach and engagement, education, preservation and reproducibility, and technologies.</p>
<p>Activities that follow on from workshop results take one of three forms. First, since the workshop, a number of workshop authors have further developed and published their white papers to make their reflections and recommendations more concrete. These authors are also conducting efforts to implement these ideas, and to make changes in the university system. Second, we plan to organise a follow-up workshop that focuses on how these principles could be implemented. Third, we believe that the outcomes of this workshop support and are connected with recent theoretical work on the position and future of open knowledge institutions.</p>
</content>
</entry>
<entry>
<title>Effects of Langmuir turbulence on upper ocean carbonate chemistry</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/langmuir-carbonate-chemistry"/>
<updated>2018-11-08T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/langmuir-carbonate-chemistry</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Effects of wave-driven Langmuir turbulence on the air-sea flux of carbon dioxide (CO<sub>2</sub>) are examined using large eddy simulations featuring actively reacting carbonate chemistry in the ocean mixed layer at small scales. Four strengths of Langmuir turbulence are examined with three types of carbonate chemistry: time-dependent, instantaneous equilibrium chemistry, and no reactions. The time-dependent model is obtained by reducing a detailed eight-species chemical mechanism using computational singular perturbation analysis, resulting in a quasi-steady-state approximation for hydrogen ion (H<sup>+</sup>), i.e., fixed pH. The reduced mechanism is then integrated in two half-time steps before and after the advection solve using a Runge–Kutta–Chebyshev scheme that is robust for stiff systems of differential equations. The simulations show that, as the strength of Langmuir turbulence increases, CO<sub>2</sub> fluxes are enhanced by rapid overturning of the near-surface layer, which rivals the removal rate of CO<sub>2</sub> by time-dependent reactions. Equilibrium chemistry and non-reactive models are found to bring more and less carbon, respectively, into the ocean as compared to the more realistic time-dependent model. These results have implications for Earth system models that either neglect Langmuir turbulence or use equilibrium, instead of time-dependent, chemical mechanisms.</p>
</content>
</entry>
<entry>
<title>Using SIMD and SIMT vectorization to evaluate sparse chemical kinetic Jacobian matrices and thermochemical source terms</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/simd-simt-pyjac2"/>
<updated>2018-09-04T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/simd-simt-pyjac2</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Accurately predicting key combustion phenomena in reactive-flow simulations, e.g., lean blow-out, extinction/ignition limits and pollutant formation, necessitates the use of detailed chemical kinetics. The large size and high levels of numerical stiffness typically present in chemical kinetic models relevant to transportation/power-generation applications make the efficient evaluation/factorization of the chemical kinetic Jacobian and thermochemical source terms critical to the performance of reactive-flow codes. Here we investigate the performance of vectorized evaluation of constant-pressure/volume thermochemical source terms and sparse/dense chemical kinetic Jacobians using single-instruction, multiple-data (SIMD) and single-instruction, multiple-thread (SIMT) paradigms. These are implemented in pyJac, an open-source, reproducible code-generation platform. Selected chemical kinetic models covering the range of sizes typically used in reactive-flow simulations were used for demonstration. A new formulation of the chemical kinetic governing equations was derived and verified, resulting in Jacobian sparsities of 28.6–92.0% for the tested models. Speedups of 3.40–4.08× were found for shallow-vectorized OpenCL source-rate evaluation compared with a parallel OpenMP code on an AVX2 central processing unit (CPU), increasing to 6.63–9.44× and 3.03–4.23× for sparse and dense chemical kinetic Jacobian evaluation, respectively. Furthermore, the effect of data ordering was investigated and a storage pattern specifically formulated for vectorized evaluation was proposed; the effects of the constant-pressure/volume assumptions and of varying vector widths on source-term evaluation performance were also studied. Speedups reached up to 17.60× and 45.13× for dense and sparse evaluation on the GPU, and up to 55.11× and 245.63× on the CPU over a first-order finite-difference Jacobian approach. Further, dense Jacobian evaluation was up to 19.56× and 2.84× faster than a previous version of pyJac on a CPU and GPU, respectively. Finally, future directions for vectorized chemical kinetic evaluation and sparse linear-algebra techniques were discussed.</p>
</content>
</entry>
<entry>
<title>AJ featured in Gazette-Times</title>
<link href="https://niemeyer-research-group.github.io//news/aj-featured-newspaper"/>
<updated>2018-07-09T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//news/aj-featured-newspaper</id>
<content type="html"><p><a href="/team/aj-fillo">AJ</a> was <a href="https://www.gazettetimes.com/news/local/osu-grad-student-debuts-new-science-video-on--d/article_322869b4-1acf-554f-93c2-2016bbfc9248.html">featured in the <em>Corvallis Gazette-Times</em></a> for his recent <a href="http://www.liblabscience.com">Lib Lab</a> episode on <a href="https://youtu.be/_fwziIJPwMs">3D printing of metal</a>.</p>
</content>
</entry>
<entry>
<title>Effects of fuel content and density on the smoldering characteristics of cellulose and hemicellulose</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/smoldering-exp-comp-poci"/>
<updated>2018-07-08T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/smoldering-exp-comp-poci</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Smoldering combustion in wildland fires poses hazards for both ecosystems and humans by destroying biomass, transitioning to flaming combustion, and releasing significant quantities of pollution. Understanding the parameters that control smoldering is necessary to help predict and potentially mitigate these hazards. A challenge in identifying these parameters is the wide variety of biomasses which occur in nature. The objective of this study is to identify the effects of density and fuel concentration on the smoldering characteristics of cellulose and hemicellulose mixtures. These fuels were considered because they are some of the major organic constituents within biomass. To this end, downward smoldering propagation velocities were measured for 50%, 75%, and 100% cellulose content at densities varying from 170 to 400 kg/m<sup>3</sup>. The horizontal smoldering propagation velocities and temperature distributions were also determined for loosely packed samples ranging from 100% to 0% cellulose (with residual hemicellulose). Additionally, horizontal smoldering propagation velocities were determined for systematically varied ratios of cellulose (50% to 100%) and densities (200 to 400 kg/m<sup>3</sup>). The fuel was burned in an insulated reactor box. An infrared camera measured the horizontal propagation velocity, and thermocouples measured the downward propagation. A one-dimensional reactive porous-media model with reduced chemistry was used to identify key processes causing the observed sensitivities. At constant packing density, the propagation velocity increased as cellulose content decreased, because of the decreased heat release with increased cellulose content and the earlier onset of hemicellulose pyrolysis. At constant fuel content, the propagation velocity decreased with increasing packing density because of reduced oxygen diffusion. The propagation velocity increased with cellulose content when the fuel was loosely packed because of the decreasing density.</p>
</content>
</entry>
<entry>
<title>Computational study of the effects of density, fuel content, and moisture content on smoldering propagation of cellulose and hemicellulose mixtures</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/smoldering-mixtures-poci"/>
<updated>2018-06-24T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/smoldering-mixtures-poci</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Smoldering combustion plays an important role in forest and wildland fires. Fires from smoldering combustion can last for long periods of time, emit more pollutants, and be difficult to extinguish. This makes the study of smoldering in woody fuels and forest duff important. Cellulose, hemicellulose, and lignin are the major constituents in these types of fuels, in different proportions for different fuels. In this paper, we developed a 1-D model using the open-source software Gpyro to study the smoldering combustion of cellulose and hemicellulose mixtures. We first validated our simulations against experimentally obtained values of propagation speed for mixtures with fuel compositions including 100%, 75%, 50%, and 25% cellulose, with the remaining proportion of hemicellulose. Then, we studied the effects of varying fuel composition, density, and moisture content on smoldering combustion. We find that the propagation speed of smoldering increased with decreases in density and increases in hemicellulose content, which we attribute to the role of oxygen diffusion. Propagation speed increased with moisture content for pure cellulose up to a certain limiting value, after which the propagation speed dropped by up to 70%. The mean peak temperature of smoldering increased with increases in hemicellulose content and density, and decreased with increasing moisture content.</p>
</content>
</entry>
<entry>
<title>NRG group MS defenses</title>
<link href="https://niemeyer-research-group.github.io//news/dan-andrew-luz-defended"/>
<updated>2018-06-20T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//news/dan-andrew-luz-defended</id>
<content type="html"><p>NRG members <a href="/team/dan-magee">Dan</a>, <a href="/team/andrew-alferman">Andrew</a>, and <a href="/team/luz-pacheco">Luz</a> successfully defended their MS degrees—congratulations!</p>
</content>
</entry>
<entry>
<title>FACE gasoline surrogates formulated by an enhanced multivariate optimization framework</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/FACE-surrogates"/>
<updated>2018-06-19T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/FACE-surrogates</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Design and optimization of higher-efficiency, lower-emission internal combustion engines are highly dependent on fuel chemistry. Resolving chemistry for complex fuels, like gasoline, is challenging. A solution is to study a fuel surrogate: a blend of a small number of well-characterized hydrocarbons that represents real fuels by emulating their thermophysical and chemical kinetic properties. In the current study, an existing gasoline surrogate formulation algorithm is further enhanced by incorporating novel chemometric models. These models use infrared spectra of hydrocarbon fuels to predict octane numbers, and are valid for a wide array of neat hydrocarbons and mixtures of such. This work leverages 14 hydrocarbon species to form tailored surrogate palettes for the Fuels for Advanced Combustion Engine (FACE) gasolines, including candidate component species not previously considered: n-pentane, 2-methylpentane, 1-pentene, cyclohexane, and o-xylene. We evaluate the performance of “full” and “reduced” surrogates for the 10 FACE gasolines, containing between 8–12 and 4–7 components, respectively. These surrogates match the target properties of the real fuels, on average, within 5%. This close agreement demonstrates that the algorithm can design surrogates matching a wide array of target properties: octane numbers, density, hydrogen-to-carbon ratio, distillation characteristics, and proportions of carbon–carbon bond types. We also compare our surrogates to those available in the literature (FACE gasolines A, C, F, G, I and J). Our surrogates for these fuels, on average, better match RON, MON, and distillation characteristics, within 0.5%, 0.7%, and 0.9%, respectively, compared with literature surrogates at 1.2%, 1.1%, and 1.8% error. However, our surrogates perform slightly worse for density, hydrogen-to-carbon ratio, and carbon–carbon bond types, at errors of 3.3%, 6.8%, and 2.2%, compared with literature surrogates at 1.3%, 2.3%, and 1.9%. Overall, the approach demonstrated here offers a promising method to better design surrogates for gasoline-like fuels with a wide array of properties.</p>
</content>
</entry>
<entry>
<title>The case for openness in engineering research</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/openness-engineering"/>
<updated>2018-04-26T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/openness-engineering</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>In this article, we review the literature on the benefits, and possible downsides, of openness in engineering research. We attempt to examine the issue from multiple perspectives, including reasons and motivations for introducing open practices into an engineering researcher’s workflow and the challenges faced by scholars looking to do so. Further, we present our thoughts and reflections on the role that open engineering research can play in defining the purpose and activities of the university. We have made some specific recommendations on how the public university can recommit to and push the boundaries of its role as the creator and promoter of public knowledge. In doing so, the university will further demonstrate its vital role in the continued economic, social, and technological development of society. We have also included some thoughts on how this applies specifically to the field of engineering and how a culture of openness and sharing within the engineering community can help drive societal development.</p>
</content>
</entry>
<entry>
<title>Analysis of an approach for detecting arc positions during vacuum arc remelting based on magnetic flux density measurements</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/jmse-aps-analysis"/>
<updated>2018-02-24T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/jmse-aps-analysis</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Vacuum arc remelting (VAR) is a melting process for the production of homogeneous ingots, achieved by applying a direct current to create electrical arcs between the input electrode and the resultant ingot. Arc behavior drives quality of the end product, but no methodology is currently used in VAR furnaces at large scale to track arcs in real time. An arc position sensing (APS) technology was recently developed as a methodology to predict arc locations using magnetic field values measured by sensors. This system couples finite element analysis of VAR furnace magnetostatics with direct magnetic field measurements to predict arc locations. However, the published APS approach did not consider the effect of various practical issues that could affect the magnetic field distribution and thus arc location predictions. In this paper, we studied how altering assumptions made in the finite element model affects arc location predictions. These include the vertical position of the sensor relative to the electrode-ingot gap, a varying electrode-ingot gap size, ingot shrinkage, and the use of multiple sensors rather than a single sensor. Among the parameters studied, only the vertical distance between the arc and sensor locations introduces large errors, and should be considered further when applying an APS system. However, averaging the predicted locations from four evenly spaced sensors helps reduce this error to no more than 16% for a sensor position varying from 0.508 m below to 0.508 m above the electrode-ingot gap height.</p>
</content>
</entry>
<entry>
<title>Accelerating finite-rate chemical kinetics with coprocessors: Comparing vectorization methods on GPUs, MICs, and CPUs</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/cpc-chemistry-vectorization"/>
<updated>2018-02-23T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/cpc-chemistry-vectorization</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Accurate and efficient methods for solving stiff ordinary differential equations (ODEs) are a critical component of turbulent combustion simulations with finite-rate chemistry. The ODEs governing the chemical kinetics at each mesh point are decoupled by operator-splitting, allowing each to be solved concurrently. An efficient ODE solver must then take into account the available thread and instruction-level parallelism of the underlying hardware, especially on many-core coprocessors, as well as the numerical efficiency. A stiff Rosenbrock and a nonstiff Runge–Kutta ODE solver are both implemented using the single-instruction, multiple-thread (SIMT) and single-instruction, multiple-data (SIMD) paradigms within OpenCL. Both methods solve multiple ODEs concurrently within the same instruction stream. The performance of these parallel implementations was measured on three chemical kinetic models of increasing size across several multicore and many-core platforms. Two separate benchmarks were conducted to clearly determine any performance advantage offered by either method. The first benchmark measured the run-time of evaluating the right-hand-side source terms in parallel, and the second benchmark integrated a series of constant-pressure, homogeneous reactors using the Rosenbrock and Runge–Kutta solvers. The right-hand-side evaluations with SIMD parallelism on the host multicore Xeon CPU and many-core Xeon Phi coprocessor performed approximately three times faster than the baseline multithreaded C++ code. The SIMT parallel model on the host and Phi was 13%–35% slower than the baseline, while the SIMT model on the NVIDIA Kepler GPU provided approximately the same performance as the SIMD model on the Phi. The runtimes for both ODE solvers decreased significantly with the SIMD implementations on the host CPU (2.5–2.7×) and Xeon Phi coprocessor (4.7–4.9×) compared to the baseline parallel code. The SIMT implementations on the GPU ran 1.5–1.6 times faster than the baseline multithreaded CPU code; however, this was significantly slower than the SIMD versions on the host CPU or the Xeon Phi. The performance difference between the three platforms was attributed to thread divergence caused by the adaptive step sizes within the ODE integrators. Analysis showed that the wider vector width of the GPU incurs a higher level of divergence than the narrower Sandy Bridge or Xeon Phi. The significant performance improvement provided by the SIMD parallel strategy motivates further research into ODE solver methods that are both SIMD-friendly and computationally efficient.</p>
</content>
</entry>
<entry>
<title>Journal of Open Source Software (JOSS): design and first-year review</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/joss-peerjcs"/>
<updated>2018-02-12T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/joss-peerjcs</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>This article describes the motivation, design, and progress of the Journal of Open Source Software (JOSS). JOSS is a free and open-access journal that publishes articles describing research software. It has the dual goals of improving the quality of the software submitted and providing a mechanism for research software developers to receive credit. While designed to work within the current merit system of science, JOSS addresses the dearth of rewards for key contributions to science made in the form of software. JOSS publishes articles that encapsulate scholarship contained in the software itself, and its rigorous peer review targets the software components: functionality, documentation, tests, continuous integration, and the license. A JOSS article contains an abstract describing the purpose and functionality of the software, references, and a link to the software archive. The article is the entry point of a JOSS submission, which encompasses the full set of software artifacts. Submission and review proceed in the open, on GitHub. Editors, reviewers, and authors work collaboratively and openly. Unlike other journals, JOSS does not reject articles requiring major revision; while not yet accepted, articles remain visible and under review until the authors make adequate changes (or withdraw, if unable to meet requirements). Once an article is accepted, JOSS gives it a digital object identifier (DOI), deposits its metadata in Crossref, and the article can begin collecting citations on indexers like Google Scholar and other services. Authors retain copyright of their JOSS article, releasing it under a Creative Commons Attribution 4.0 International License. In its first year, starting in May 2016, JOSS published 111 articles, with more than 40 additional articles under review. JOSS is a sponsored project of the nonprofit organization NumFOCUS and is an affiliate of the Open Source Initiative (OSI).</p>
</content>
</entry>
<entry>
<title>Fourth Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE4)</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/wssspe4-jors"/>
<updated>2018-02-10T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/wssspe4-jors</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>This article summarizes motivations, organization, and activities of the Fourth Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE4). The WSSSPE series promotes sustainable research software by positively impacting principles and best practices, careers, learning, and credit. This article discusses the code of conduct; the mission and vision statements that were drafted at the workshop and finalized shortly after it; the keynote and idea papers, position papers, experience papers, demos, and lightning talks presented during the workshop; and a panel discussion on best practices. The main part of the article discusses the set of working groups that formed during the meeting, along with contact information for readers who may want to join a group. Finally, it discusses a survey of the workshop attendees.</p>
</content>
</entry>
<entry>
<title>ChemKED: A human- and machine-readable data standard for chemical kinetics experiments</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/chemked-ijck"/>
<updated>2018-01-25T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/chemked-ijck</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Fundamental experimental measurements of quantities such as ignition delay times, laminar flame speeds, and species profiles (among others) serve important roles in understanding fuel chemistry and validating chemical kinetic models. However, despite both the importance and abundance of such information in the literature, the community lacks a widely adopted standard format for this data. This impedes both sharing and wide use by the community. Here we introduce a new chemical kinetics experimental data format, ChemKED, and the related Python-based package, PyKED, for validating and working with ChemKED-formatted files. We also review past and related efforts and motivate the need for a new solution. ChemKED currently supports the representation of autoignition delay time measurements from shock tubes and rapid compression machines. ChemKED-formatted files contain all of the information needed to simulate experimental data points, including the uncertainty of the data. ChemKED is based on the YAML data serialization language and is intended as a human- and machine-readable standard for easy creation and automated use. Development of ChemKED and PyKED occurs openly on GitHub under the BSD 3-clause license, and contributions from the community are welcome. Plans for future development include support for experimental data from laminar flame, jet-stirred reactor, and speciation measurements.</p>
</content>
</entry>
<entry>
<title>Accelerating solutions of one-dimensional unsteady PDEs with GPU-based swept time–space decomposition</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/GPU-swept-rule-1D"/>
<updated>2018-01-09T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/GPU-swept-rule-1D</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution difficult to achieve in reasonable amounts of time—even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time–space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2–9x for a range of problem sizes compared with simple GPU versions and 7–300x compared with parallel CPU versions. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2–1.9x worse than a standard implementation for all problem sizes.</p>
</content>
</entry>
<entry>
<title>Assessing impacts of discrepancies in model parameters on autoignition model performance: A case study using butanol</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/model-parameter-discrepancy"/>
<updated>2018-01-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/model-parameter-discrepancy</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Side-by-side comparison of detailed kinetic models using a new tool to aid recognition of species structures reveals significant discrepancies in the published rates of many reactions and thermochemistry of many species. We present a first automated assessment of the impact of these varying parameters on observable quantities of interest—in this case, autoignition delay—using literature experimental data. A recent kinetic model for the isomers of butanol was imported into a common database. Individual reaction rate and thermodynamic parameters of species were varied using values encountered in combustion models from recent literature. The effects of over 1600 alternative parameters were considered. Separately, experimental data were collected from recent publications and converted into the standard YAML-based ChemKED format. The Cantera-based model validation tool, PyTeCK, was used to automatically simulate autoignition using the generated models and experimental data, to judge the performance of the models. Taken individually, most of the parameter substitutions have little effect on the overall model performance, although a handful have quite large effects, and are investigated more thoroughly. Additionally, models varying multiple parameters simultaneously were evolved using a genetic algorithm to give fastest and slowest autoignition delay times, showing that changes exceeding a factor of 10 in ignition delay time are possible by cherry-picking from only accepted, published parameters. All data and software used in this study are available openly.</p>
</content>
</entry>
<entry>
<title>AJ in Gazette-Times</title>
<link href="https://niemeyer-research-group.github.io//news/liblab-local-news"/>
<updated>2017-12-20T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//news/liblab-local-news</id>
<content type="html"><p>NRG PhD candidate <a href="/team/aj-fillo">AJ</a> was featured in an <a href="http://www.gazettetimes.com/news/local/library-science-videos-do-a-deep-dive-on-pressure/article_a9476d78-5146-593e-9f65-bb1a842252e8.html">article in the <em>Corvallis Gazette-Times</em></a> about filming an episode for his <a href="https://www.youtube.com/channel/UCQvOIwhriBXNMaxXgLl4PPA/featured">LIB LAB</a> video series at the <a href="http://aquarium.org">Oregon Coast Aquarium</a>.
This episode will be part of a collaboration with <a href="http://nicolesharp.com">Dr. Nicole Sharp</a>’s <a href="http://fyfluiddynamics.com">FY Fluid Dynamics</a> blog.</p>
</content>
</entry>
<entry>
<title>NRG in the news</title>
<link href="https://niemeyer-research-group.github.io//news/NRG-featured-news"/>
<updated>2017-10-21T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//news/NRG-featured-news</id>
<content type="html"><p>NRG projects and team members have been in the news lately!</p>
<ul>
<li>
<p>In October 2017, <a href="/team/aj-fillo">AJ</a> won the 2017 OSU Distinguished Master’s Thesis Award. His <a href="http://hdl.handle.net/1957/60072">thesis</a> reported and analyzed the turbulent flame speed of jet and jet-like fuels.</p>
</li>
<li>
<p>In April 2017, <a href="/team/kyle-niemeyer">Kyle</a> was featured in <a href="http://ecampus.oregonstate.edu/research/podcast/e56/">an episode</a> of the <a href="http://ecampus.oregonstate.edu/research/podcast/">Research in Action podcast</a>, where he discussed open science.</p>
</li>
<li>
<p>In March 2017, <a href="https://www.asme.org/engineering-topics/articles/manufacturing-design/visualizing-greener-specialty-metals">ASME featured a story</a> on arc position sensing technology for vacuum arc remelting furnaces, mentioning <a href="/team/miguel-soler">Miguel</a> and <a href="/team/kyle-niemeyer">Kyle</a>’s project funded by OregonBEST.</p>
</li>
<li>
<p>In December 2016, the <em>Corvallis Gazette-Times</em> <a href="http://www.gazettetimes.com/news/local/research-starts-off-with-a-bang/article_571a90b7-d8e9-5a1d-b8d6-81f92b6ea802.html">wrote about</a> <a href="/team/matt-zaiger">Matt</a> and <a href="/team/kyle-niemeyer">Kyle</a>’s NETL-funded project studying pulse detonation engines.</p>
</li>
<li>
<p>The College of Engineering’s <a href="http://www.journalgraphicsdigitalpublications.com/epubs/OSUALUMNIASSOCIATIONINC/EngineeringMomentumSpring2016/viewer/desktop/#page/14">Spring 2016 Momentum! magazine</a> featured <a href="/team/tejas-mulky">Tejas</a> and <a href="/team/kyle-niemeyer">Kyle</a>’s SERDP-funded project studying smoldering combustion.</p>
</li>
</ul>
</content>
</entry>
<entry>
<title>A multi-disciplinary perspective on emergent and future innovations in peer review</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/peer-review"/>
<updated>2017-07-20T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/peer-review</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Peer review of research articles is a core part of our scholarly communication system. In spite of its importance, the status and purpose of peer review is often contested. What is its role in our modern digital research and communications infrastructure? Does it perform to the high standards with which it is generally regarded? Studies of peer review have shown that it is prone to bias and abuse in numerous dimensions, frequently unreliable, and can fail to detect even fraudulent research. With the advent of Web technologies, we are now witnessing a phase of innovation and experimentation in our approaches to peer review. These developments prompted us to examine emerging models of peer review from a range of disciplines and venues, and to ask how they might address some of the issues with our current systems of peer review. We examine the functionality of a range of social Web platforms, and compare these with the traits underlying a viable peer review system: quality control, quantified performance metrics as engagement incentives, and certification and reputation. Ideally, any new systems will demonstrate that they out-perform current models while avoiding as many of the biases of existing systems as possible. We conclude that there is considerable scope for new peer review initiatives to be developed, each with their own potential issues and advantages. We also propose a novel hybrid platform model that, at least partially, resolves many of the technical and social issues associated with peer review, and can potentially disrupt the entire scholarly communication system. Success for any such development relies on reaching a critical threshold of research community engagement with both the process and the platform, and therefore cannot be achieved without a significant change of incentives in research environments.</p>
</content>
</entry>
<entry>
<title>An investigation of GPU-based stiff chemical kinetics integration methods</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/stiff-GPU-integrators"/>
<updated>2017-05-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/stiff-GPU-integrators</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>A fifth-order implicit Runge–Kutta method and two fourth-order exponential integration methods equipped with Krylov subspace approximations were implemented for the GPU and paired with the analytical chemical kinetic Jacobian software <code class="language-plaintext highlighter-rouge">pyJac</code>. The performance of each algorithm was evaluated by integrating thermochemical state data sampled from stochastic partially stirred reactor simulations and compared with the commonly used CPU-based implicit integrator <code class="language-plaintext highlighter-rouge">CVODE</code>. We estimated that the implicit Runge–Kutta method running on a single GPU is equivalent to <code class="language-plaintext highlighter-rouge">CVODE</code> running on 12–38 CPU cores for integration of a single global integration time step of 1e-6 s with hydrogen and methane models. In the stiffest case studied—the methane model with a global integration time step of 1e-4 s—thread divergence and higher memory traffic significantly decreased GPU performance to the equivalent of <code class="language-plaintext highlighter-rouge">CVODE</code> running on approximately three CPU cores. The exponential integration algorithms performed more slowly than the implicit integrators on both the CPU and GPU. Thread divergence and memory traffic were identified as the main limiters of GPU integrator performance, and techniques to mitigate these issues were discussed. Use of a finite-difference Jacobian on the GPU—in place of the analytical Jacobian provided by <code class="language-plaintext highlighter-rouge">pyJac</code>—greatly decreased integrator performance due to thread divergence, resulting in maximum slowdowns of 7.11–240.96 times; in comparison, the corresponding slowdowns on the CPU were just 1.39–2.61 times, underscoring the importance of use of an analytical Jacobian for efficient GPU integration. 
Finally, future research directions for working towards enabling realistic chemistry in reactive-flow simulations via GPU/SIMD-accelerated stiff chemical kinetic integration were identified.</p>
</content>
</entry>
<entry>
<title>Effects of oil and water contamination on natural gas engine combustion processes</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/oil-water-natural-gas-engine"/>
<updated>2017-05-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/oil-water-natural-gas-engine</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Abundant availability and potential for lower emissions are drivers for increased utilization of natural gas in automotive engines for transportation applications. A novel bimodal engine has been developed that allows on-board refueling of natural gas by utilizing the engine as a compressor. Engine compression, however, alters the initial state of the natural gas. An increase in temperature and the addition of oil are two key effects attributed to the on-board refueling process. A secondary effect is the presence of water in the natural gas supply line. This study investigates the effect of upstream conditions of natural gas on three parameters: autoignition temperature, ignition delay, and laminar flame speed. These parameters play key roles in the engine combustion process. Parametric studies are conducted by varying the initial mixture temperature, water, and oil content in the fuel. The studies utilize numerical simulations conducted with detailed chemistry for natural gas with n-heptane used as a surrogate for oil. Water addition to natural gas at 1–5% by volume did not result in any major changes in the combustion processes, other than a slight reduction in laminar flame speeds. Oil addition of 1–5% by volume reduced autoignition temperature by 5–10% and ignition delay by 27–95% depending on the initial temperature. Sensitivity analysis showed that this was likely due to a decrease in the sensitivity of two recombination reactions with oil addition. Evolution profiles of key radical species also showed an increasing mole fraction of the hydroperoxy radical at lower temperature that likely aids in reducing the ignition delay. Oil addition resulted in a relatively small increase in the laminar flame speed of 1.7% along with an increase in the adiabatic flame temperature. These results help inform the combustion process and performance to be expected from the bimodal engine.</p>
</content>
</entry>
<entry>
<title>AJ in the news</title>
<link href="https://niemeyer-research-group.github.io//news/pilot-liblab-video"/>
<updated>2017-03-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//news/pilot-liblab-video</id>
<content type="html"><p><a href="/team/aj-fillo">AJ Fillo</a>, PhD candidate in the NRG, developed and starred in the <a href="https://www.youtube.com/watch?v=H96Xr0Efelk">pilot LIB LAB episode</a>, an educational STEAM outreach video on propulsion.</p>
<p>AJ was also recently featured in a <a href="http://mime.oregonstate.edu/fanning-flames">story on the MIME website</a> about both his research and the LIB LAB video series.</p>
</content>
</entry>
<entry>
<title>pyJac: Analytical Jacobian generator for chemical kinetics</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/pyjac-paper"/>
<updated>2017-02-14T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/pyjac-paper</id>
<content type="html">
<h1 id="brief-description">Brief description</h1>
<p>pyJac is an open-source Python-based tool that produces analytical Jacobian
matrices for chemical kinetics differential equations. This paper describes the
theory behind pyJac, including derivation of the chemical kinetic Jacobian and
all necessary components, pyJac’s implementation, verification of pyJac’s output,
and a performance comparison with alternative methods.</p>
<h1 id="abstract">Abstract</h1>
<p>Accurate simulations of combustion phenomena require the use of detailed chemical kinetics in order to capture limit phenomena such as ignition and extinction as well as predict pollutant formation. However, the chemical kinetic models for hydrocarbon fuels of practical interest typically have large numbers of species and reactions and exhibit high levels of mathematical stiffness in the governing differential equations, particularly for larger fuel molecules. In order to integrate the stiff equations governing chemical kinetics, generally reactive-flow simulations rely on implicit algorithms that require frequent Jacobian matrix evaluations. Some in situ and a posteriori computational diagnostics methods also require accurate Jacobian matrices, including computational singular perturbation and chemical explosive mode analysis. Typically, finite differences numerically approximate these, but for larger chemical kinetic models this poses significant computational demands since the number of chemical source term evaluations scales with the square of species count. Furthermore, existing analytical Jacobian tools do not optimize evaluations or support emerging SIMD processors such as GPUs. Here we introduce <code class="language-plaintext highlighter-rouge">pyJac</code>, a Python-based open-source program that generates analytical Jacobian matrices for use in chemical kinetics modeling and analysis. In addition to producing the necessary customized source code for evaluating reaction rates (including all modern reaction rate formulations), the chemical source terms, and the Jacobian matrix, <code class="language-plaintext highlighter-rouge">pyJac</code> uses an optimized evaluation order to minimize computational and memory operations. 
As a demonstration, we first establish the correctness of the Jacobian matrices for kinetic models of hydrogen, methane, ethylene, and isopentanol oxidation (with numbers of species ranging from 13 to 360) by showing agreement within 0.001% of matrices obtained via automatic differentiation. We then demonstrate the performance achievable on CPUs and GPUs using <code class="language-plaintext highlighter-rouge">pyJac</code> via matrix evaluation timing comparisons; the routines produced by <code class="language-plaintext highlighter-rouge">pyJac</code> outperformed first-order finite differences by 2.9–5.3 times and the existing analytical Jacobian software <code class="language-plaintext highlighter-rouge">TChem</code> by 3.9–41 times. The Jacobian matrix generator we describe here will be useful for reducing the cost of integrating chemical source terms with implicit algorithms in particular and algorithms that require an accurate Jacobian matrix in general. Furthermore, the open-source release of the program and Python-based implementation will enable wide adoption.</p>
</content>
</entry>
<entry>
<title>Reduced chemistry for butanol isomers at engine-relevant conditions</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/butanol-skeletal-models"/>
<updated>2016-12-12T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/butanol-skeletal-models</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Butanol has received significant research attention as a second-generation biofuel in the past few years. In the present study, skeletal mechanisms for four butanol isomers were generated from two widely accepted, well-validated detailed chemical kinetic models for the butanol isomers. The detailed models were reduced using a two-stage approach consisting of the directed relation graph with error propagation and sensitivity analysis. During the reduction process, issues were encountered with pressure-dependent reactions formulated using the logarithmic pressure interpolation approach; these issues are discussed and recommendations are made to avoid ambiguity in its future implementation in mechanism development. The performance of the skeletal mechanisms generated here was compared with that of detailed mechanisms in simulations of autoignition delay times, laminar flame speeds, and perfectly stirred reactor temperature response curves and extinction residence times, over a wide range of pressures, temperatures, and equivalence ratios. Good agreement was observed between the detailed and skeletal mechanisms, demonstrating the adequacy of the resulting reduced chemistry for all the butanol isomers in predicting global combustion phenomena. In addition, the skeletal mechanisms closely predicted the time-histories of fuel mass fractions in homogeneous compression-ignition engine simulations. The performance of each butanol isomer was additionally compared with that of a gasoline surrogate with an antiknock index of 87 in a homogeneous compression-ignition engine simulation. The gasoline surrogate was consumed faster than any of the butanol isomers, with <em>tert</em>-butanol exhibiting the slowest fuel consumption rate. While <em>n</em>-butanol and isobutanol displayed the most similar consumption profiles relative to the gasoline surrogate, the two literature chemical kinetic models predicted different orderings.</p>
</content>
</entry>
<entry>
<title>Kyle featured in faculty spotlight article</title>
<link href="https://niemeyer-research-group.github.io//news/kyle-faculty-spotlight"/>
<updated>2016-11-23T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//news/kyle-faculty-spotlight</id>
<content type="html"><p><a href="/team/kyle-niemeyer">Kyle</a> was featured in a <a href="http://blogs.oregonstate.edu/mimenews/2016/11/23/faculty-spotlight-kyle-niemeyer/">faculty spotlight article</a> describing his research and background.</p>
</content>
</entry>
<entry>
<title>Predicting fuel research octane number using Fourier-transform infrared absorption spectra of neat hydrocarbons</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/RON-FTIR"/>
<updated>2016-11-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/RON-FTIR</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Liquid transportation fuels require costly and time-consuming tests to characterize metrics, such as Research Octane Number (RON) for gasoline. If fuel sale restrictions requiring use of standard Cooperative Fuel Research (CFR) testing procedures do not apply, these tests may be avoided by using multivariate statistical models to predict RON and other quantities. Existing techniques inform these models using information about existing, similar fuels—for example, training a model for gasoline RON with a large number of characterized gasoline samples. While this yields the most accurate predictive models for these fuels, this approach lacks the ability to predict characteristics of fuels outside the training data set. Here we show that an accurate statistical model for the RON of gasoline and gasoline-like fuels can be constructed by ensuring that key functional groups are represented in the spectroscopic data set used to train the model. We found that a principal component regression model for RON based on IR absorbance, informed using neat samples and 134 mixtures of n-heptane, isooctane, toluene, ethanol, methylcyclohexane, and 1-hexene, could predict RON for the 10 Coordinating Research Council (CRC) Fuels for Advanced Combustion Engine (FACE) gasolines and 12 FACE gasoline blends with ethanol within 34.8±36.1 on average and 51.2 in the worst case. We next studied the effect of adding 28 additional minor components found in the FACE gasolines to the statistical model, and determined that it was necessary to add additional representatives of the branched alkane and aromatics classes to reduce model error. For example, adding 2,3-dimethylpentane and xylene to the previous model allowed it to predict RON for the 22 target fuels within 0.3±4.4 on average and 7.9 in the worst case. However, we determined that the specific choice of fuel in those classes mattered less than ensuring the representation of the relevant functional group. 
This work builds upon previous efforts by creating models informed by neat and surrogate fuels—rather than complex real fuels—that could predict the performance of complex unknown fuels.</p>
</content>
</entry>
<entry>
<title>Report on the Third Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE3)</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/wssspe3-report"/>
<updated>2016-10-21T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/wssspe3-report</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>This report records and discusses the Third Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE3). The report includes a description of the keynote presentation of the workshop, which served as an overview of sustainable scientific software. It also summarizes a set of lightning talks in which speakers highlighted to-the-point lessons and challenges pertaining to sustaining scientific software. The final and main contribution of the report is a summary of the discussions, future steps, and future organization for a set of self-organized working groups on topics including developing pathways to funding scientific software; constructing useful common metrics for crediting software stakeholders; identifying principles for sustainable software engineering design; reaching out to research software organizations around the world; and building communities for software sustainability. For each group, we include a point of contact and a landing page that can be used by those who want to join that group’s future activities. The main challenge left by the workshop is to see if the groups will execute these activities that they have scheduled, and how the WSSSPE community can encourage this to happen.</p>
</content>
</entry>
<entry>
<title>The challenge and promise of software citation for credit, identification, discovery, and reuse</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/software-citation-challenge"/>
<updated>2016-10-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/software-citation-challenge</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>In this article, we present the challenge of software citation as a method to ensure credit for and identification, discovery, and reuse of software in scientific and engineering research. We discuss related work and key challenges/research directions, including suggestions for metadata necessary for software citation.</p>
</content>
</entry>
<entry>
<title>Software citation principles</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/software-citation-principles"/>
<updated>2016-09-19T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/software-citation-principles</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Software is a critical part of modern research and yet there is little support across the scholarly ecosystem for its acknowledgement and citation. Inspired by the activities of the FORCE11 working group focused on data citation, this document summarizes the recommendations of the FORCE11 Software Citation Working Group and its activities between June 2015 and April 2016. Based on a review of existing community practices, the goal of the working group was to produce a consolidated set of citation principles that may encourage broad adoption of a consistent policy for software citation across disciplines and venues. Our work is presented here as a set of software citation principles, a discussion of the motivations for developing the principles, reviews of existing community practice, and a discussion of the requirements these principles would place upon different stakeholders. Working examples and possible technical solutions for how these principles can be implemented will be discussed in a separate paper.</p>
</content>
</entry>
<entry>
<title>Counterflow ignition of <i>n</i>-butanol at atmospheric and elevated pressures</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/nbutanol-counterflow"/>
<updated>2015-10-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/nbutanol-counterflow</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Critical to the development of predictive combustion models is a robust understanding of the coupled effects of chemical kinetics and convective–diffusive transport at both atmospheric and elevated pressures. The present study describes a new variable-pressure non-premixed counterflow ignition experiment designed to address the need for well-characterized reference data to validate such models under conditions sensitive to both chemical and transport processes. A comprehensive characterization of system behavior is provided to demonstrate boundary condition and ignition quality as well as adherence to the assumption of quasi-one-dimensionality, and to suggest limitations and best practices for counterflow ignition experiments. This effort reveals that the counterflow ignition experiment requires special attention to ignition location in order to ensure that the assumption of quasi-one-dimensionality is valid, particularly at elevated pressures. This experimental tool is then applied to the investigation of <em>n</em>-butanol for pressures of 1–4 atm, pressure-weighted strain rates of 200–400 s<sup>−1</sup>, and fuel mole fractions of 0.05–0.25. Results are simulated using two <em>n</em>-butanol models available in the literature and used to validate and assess model performance. Comparison of experimental and numerical ignition results for <em>n</em>-butanol demonstrates that while existing models largely capture the trends observed with varying pressure, strain rate, and fuel loading, the models universally over-predict experimental ignition temperatures. While several transport coefficients are found to exhibit order-of-magnitude or greater sensitivities relative to reaction rates, variation of transport parameters is not able to account for the large deviations observed between experimental and numerical results. Further comparison of ignition kernel structure and fuel breakdown pathways between two literature models suggests that an under-prediction in the radical pool growth with respect to temperature variation may be responsible for both the deviation from the experimental results and the discrepancy in ignition temperature results observed between models.</p>
</content>
</entry>
<entry>
<title>Development of efficient and accurate skeletal mechanisms for hydrocarbon fuels and kerosene surrogate</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/kerosene-model-reduction"/>
<updated>2015-10-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/kerosene-model-reduction</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>In this paper, the methodology of the directed relation graph with error propagation and sensitivity analysis (DRGEPSA), proposed by Niemeyer et al. [Combustion and Flame 157 (2010) 1760–1770], and its differences from the original directed relation graph method are described. Using DRGEPSA, the detailed mechanism of ethylene containing 71 species and 395 reaction steps is reduced to several skeletal mechanisms with different error thresholds. The 25 species and 131 steps mechanism and the 24 species and 115 steps mechanism are found to be accurate for the predictions of ignition delay time and laminar flame speed. Although further reduction leads to a smaller skeletal mechanism with 19 species and 68 steps, it is no longer able to represent the correct reaction processes. With the DRGEPSA method, a detailed mechanism for n-dodecane considering low-temperature chemistry and containing 2115 species and 8157 steps is reduced to a much smaller mechanism with 249 species and 910 steps while retaining good accuracy. If considering only high-temperature (higher than 1000 K) applications, the detailed mechanism can be simplified to even smaller mechanisms with 65 species and 340 steps or 48 species and 220 steps. Furthermore, a detailed mechanism for a kerosene surrogate having 207 species and 1592 steps is reduced with various error thresholds and the results show that the 72 species and 429 steps mechanism and the 66 species and 392 steps mechanism are capable of predicting correct combustion properties compared to those of the detailed mechanism. It is well recognized that kinetic mechanisms can be effectively used in computations only after they are reduced to an acceptable size level for computation capacity while at the same time retaining accuracy. Thus, the skeletal mechanisms generated from the present work are expected to be useful for the application of kinetic mechanisms of hydrocarbons to numerical simulations of turbulent or supersonic combustion.</p>
</content>
</entry>
<entry>
<title>A novel fuel performance index for LTC engines based on operating envelopes in light-duty driving cycle simulations</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/LTC-index"/>
<updated>2015-10-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/LTC-index</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Low-temperature combustion (LTC) engine concepts such as homogeneous charge compression ignition (HCCI) offer the potential of improved efficiency and reduced emissions of NOx and particulates. However, engines can only successfully operate in HCCI mode for limited operating ranges that vary depending on the fuel composition. Unfortunately, traditional ratings such as octane number poorly predict the autoignition behavior of fuels in such engine modes, and metrics recently proposed for HCCI engines leave room for improvement when wide ranges of fuels are considered. In this study, a new index for ranking fuel suitability for LTC engines was defined, based on the fraction of potential fuel savings achieved in the FTP-75 light-duty vehicle driving cycle. Driving cycle simulations were performed using a typical light-duty passenger vehicle, providing pairs of engine speed and load points. Separately, single-zone naturally aspirated HCCI engine simulations were performed for a variety of fuels in order to determine the operating envelopes for each. These results were combined to determine the varying improvement in fuel economy offered by fuels, forming the basis for a fuel performance index. Results showed that, in general, lower octane fuels performed better, resulting in higher LTC fuel index values; however, octane number alone did not predict fuel performance.</p>
</content>
</entry>
<entry>
<title>Investigation of the LTC fuel performance index for oxygenated reference fuel blends</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/LTC-index-oxygenated"/>
<updated>2015-09-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/LTC-index-oxygenated</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>A new metric for ranking the suitability of fuels in LTC engines was recently introduced, based on the fraction of potential fuel savings achieved in the FTP-75 light-duty vehicle driving cycle. In the current study, this LTC fuel performance index was calculated and analyzed for a number of fuel blends composed of <em>n</em>-heptane, isooctane, toluene, and ethanol in various combinations and ratios corresponding to octane numbers from 0 to 100. In order to calculate the LTC index for each fuel, driving cycle simulations were first performed using a typical light-duty passenger vehicle, providing pairs of engine speed and load points. Separately, for each fuel blend considered, single-zone naturally aspirated HCCI engine simulations with a compression ratio of 9.5 were performed in order to determine the operating envelopes. These results were combined to determine the varying improvement in fuel economy offered by fuels, forming the basis for the LTC fuel index. The resulting fuel performance indices ranged from 36.4 for neat <em>n</em>-heptane (PRF0) to 9.20 for a three-component blend of <em>n</em>-heptane, isooctane, and ethanol (ERF1). For the chosen engine and chosen conditions, in general lower-octane fuels performed better, resulting in higher LTC fuel index values; however, the fuel performance index correlated poorly with octane rating for less-reactive, higher-octane fuels.</p>
</content>
</entry>
<entry>
<title>An automated target species selection method for dynamic adaptive chemistry simulations</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/target-species-selection"/>
<updated>2015-04-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/target-species-selection</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>The relative importance index (RII) method for determining appropriate target species for dynamic adaptive chemistry (DAC) simulations using the directed relation graph with error propagation (DRGEP) method is developed. The adequacy and effectiveness of this RII method is validated for two fuels: n-heptane and isopentanol, representatives of a ground transportation fuel component and bio-alcohol, respectively.</p>
<p>The conventional method of DRGEP target species selection involves picking an unchanging (static) set of target species based on the combustion processes of interest; however, these static target species may not remain important throughout the entire combustion simulation, adversely affecting the accuracy of the method. In particular, this behavior may significantly reduce the accuracy of the DRGEP-based DAC approach in complex multidimensional simulations where the encountered combustion conditions cannot be known a priori with high certainty. Moreover, testing multiple sets of static target species to ensure the accuracy of the method is generally computationally prohibitive. Instead, the RII method determines appropriate DRGEP target species solely from the local thermo-chemical state of the simulation, ensuring that accuracy will be maintained. Further, the RII method reduces the expertise required of users to select DRGEP target species sets appropriate to the combustion phenomena under consideration.</p>
<p>Constant volume autoignition simulations run over a wide range of initial conditions using detailed reaction mechanisms for n-heptane and isopentanol show that the RII method is able to maintain accuracy even when traditional static target species sets fail, and is even more accurate than expert-selected target species sets. Additionally, the accuracy and efficiency of the RII method are compared to those of static target species sets in single-cell engine simulations under homogeneous charge compression ignition conditions. For simulations using more stringent DRGEP thresholds, the RII method performs similarly to the static target species sets. With a larger DRGEP threshold, the RII method is significantly more accurate than the static target species sets without imposing significant computational overhead.</p>
<p>Furthermore, the applicability of the RII method to a DRG-based DAC scheme is discussed.</p>
</content>
</entry>
<entry>
<title>Reduced chemistry for a gasoline surrogate valid at engine-relevant conditions</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/reduced-gasoline-surrogate"/>
<updated>2015-01-14T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/reduced-gasoline-surrogate</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>A detailed mechanism for the four-component RD387 gasoline surrogate developed by Lawrence Livermore National Laboratory has shown good agreement with experiments in engine-relevant conditions. However, with 1388 species and 5933 reversible reactions, this detailed mechanism is far too large to use in practical engine simulations. Therefore, reduction of the detailed mechanism was performed using a multi-stage approach consisting of the DRGEPSA method, unimportant reaction elimination, isomer lumping, and analytic QSS reduction based on CSP analysis. A new greedy sensitivity analysis algorithm was developed and demonstrated to be capable of removing more species for the same error limit compared to the conventional sensitivity analysis used in DRG-based skeletal reduction methods. Using this new greedy algorithm, several skeletal and reduced mechanisms were developed at varying levels of complexity and for different target condition ranges. The final skeletal and reduced mechanisms consisted of 213 and 148 species, respectively, for a lean-to-stoichiometric, low-temperature HCCI-like range of conditions. For a lean-to-rich, high-temperature, SI/CI-like range of conditions, skeletal and reduced mechanisms were developed with 97 and 79 species, respectively. The skeletal and reduced mechanisms in this study were produced using an error limit of 10% and validated using homogeneous autoignition simulations over engine-relevant conditions—all showed good agreement in predicting ignition delay. Furthermore, extended validation was performed, including comparison of autoignition temperature profiles, PSR temperature response curves and extinction turning points, and laminar flame speed calculations. All the extended validation showed results within the 10% error limit, demonstrating the adequacy of the resulting reduced chemistry.</p>
</content>
</entry>
<entry>
<title>Mechanism reduction for multicomponent surrogates: a case study using toluene reference fuels</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/multicomponent-reduction"/>
<updated>2014-11-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/multicomponent-reduction</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Strategies and recommendations for performing skeletal reductions of multicomponent surrogate fuels are presented, through the generation and validation of skeletal mechanisms for a three-component toluene reference fuel. Using the directed relation graph with error propagation and sensitivity analysis method followed by a further unimportant reaction elimination stage, skeletal mechanisms valid over comprehensive and high-temperature ranges of conditions were developed at varying levels of detail. These skeletal mechanisms were generated based on autoignition simulations, and validation using ignition delay predictions showed good agreement with the detailed mechanism in the target range of conditions. When validated using phenomena other than autoignition, such as perfectly stirred reactor and laminar flame propagation, tight error control or more restrictions on the reduction during the sensitivity analysis stage were needed to ensure good agreement. In addition, tight error limits were needed for close prediction of ignition delay when varying the mixture composition away from that used for the reduction. In homogeneous compression-ignition engine simulations, the skeletal mechanisms closely matched the point of ignition and accurately predicted species profiles for lean to stoichiometric conditions. Furthermore, the efficacy of generating a multicomponent skeletal mechanism was compared to combining skeletal mechanisms produced separately for neat fuel components; using the same error limits, the latter resulted in a larger skeletal mechanism size that also lacked important cross reactions between fuel components. Based on the present results, general guidelines for reducing detailed mechanisms for multicomponent fuels are discussed.</p>
</content>
</entry>
<entry>
<title>Recent progress and challenges in exploiting graphics processors in computational fluid dynamics</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/review-GPU-CFD"/>
<updated>2014-02-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/review-GPU-CFD</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>The progress made in accelerating simulations of fluid flow using GPUs, and the challenges that remain, are surveyed. The review first provides an introduction to GPU computing and programming, and discusses various considerations for improved performance. Case studies comparing the performance of CPU- and GPU-based solvers for the Laplace and incompressible Navier–Stokes equations are performed in order to demonstrate the potential improvement even with simple codes. Recent efforts to accelerate CFD simulations using GPUs are reviewed for laminar, turbulent, and reactive flow solvers. Also, GPU implementations of the lattice Boltzmann method are reviewed. Finally, recommendations for implementing CFD codes on GPUs are given and remaining challenges are discussed, such as the need to develop new strategies and redesign algorithms to enable GPU acceleration.</p>
</content>
</entry>
<entry>
<title>Accelerating moderately stiff chemical kinetics in reactive-flow simulations using GPUs</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/moderately-stiff-GPU"/>
<updated>2014-01-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/moderately-stiff-GPU</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>The chemical kinetics ODEs arising from operator-split reactive-flow simulations were solved on GPUs using explicit integration algorithms. Nonstiff chemical kinetics of a hydrogen oxidation mechanism (9 species and 38 irreversible reactions) were computed using the explicit fifth-order Runge–Kutta–Cash–Karp method, and the GPU-accelerated version performed faster than single- and six-core CPU versions by factors of 126 and 25, respectively, for 524,288 ODEs. Moderately stiff kinetics, represented with mechanisms for hydrogen/carbon-monoxide (13 species and 54 irreversible reactions) and methane (53 species and 634 irreversible reactions) oxidation, were computed using the stabilized explicit second-order Runge–Kutta–Chebyshev (RKC) algorithm. The GPU-based RKC implementation performed nearly 59 and 10 times faster, for problem sizes consisting of 262,144 ODEs and larger, than the single- and six-core CPU-based RKC algorithms using the hydrogen/carbon-monoxide mechanism. With the methane mechanism, RKC-GPU performed more than 65 and 11 times faster, for problem sizes consisting of 131,072 ODEs and larger, than the single- and six-core RKC-CPU versions, and up to 57 times faster than the six-core CPU-based implicit VODE algorithm on 65,536 ODEs. In the presence of more severe stiffness, such as ethylene oxidation (111 species and 1566 irreversible reactions), RKC-GPU performed more than 17 times faster than RKC-CPU on six cores for 32,768 ODEs and larger, and at best 4.5 times faster than VODE on six CPU cores for 65,536 ODEs. With a larger time step size, RKC-GPU performed at best 2.5 times slower than six-core VODE for 8192 ODEs and larger. Therefore, the need for developing new strategies for integrating stiff chemistry on GPUs was discussed.</p>
</content>
</entry>
<entry>
<title>On the importance of graph search algorithms for DRGEP-based mechanism reduction methods</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/graph-search"/>
<updated>2011-08-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/graph-search</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>The importance of graph search algorithm choice to the directed relation graph with error propagation (DRGEP) method is studied by comparing basic and modified depth-first search, basic and R-value-based breadth-first search (RBFS), and Dijkstra’s algorithm. By using each algorithm with DRGEP to produce skeletal mechanisms from a detailed mechanism for n-heptane with randomly-shuffled species order, it is demonstrated that only Dijkstra’s algorithm and RBFS produce results independent of species order. In addition, each algorithm is used with DRGEP to generate skeletal mechanisms for n-heptane covering a comprehensive range of autoignition conditions for pressure, temperature, and equivalence ratio. Dijkstra’s algorithm combined with a coefficient scaling approach is demonstrated to produce the most compact skeletal mechanism with performance similar to that of larger skeletal mechanisms resulting from the other algorithms. The computational efficiency of each algorithm is also compared by applying the DRGEP method with each search algorithm on the large detailed mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions. Dijkstra’s algorithm implemented with a binary heap priority queue is demonstrated as the most efficient method, with a CPU cost two orders of magnitude less than the other search algorithms.</p>
</content>
</entry>
<entry>
<title>Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/skeletal-reduction"/>
<updated>2010-09-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/skeletal-reduction</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.</p>
</content>
</entry>
<entry>
<title>Three-dimensional surface texture visualization of bone tissue through epifluorescence-based serial block face imaging</title>
<link href="https://niemeyer-research-group.github.io//papers/paper/bone-imaging"/>
<updated>2009-10-01T00:00:00+00:00</updated>
<id>https://niemeyer-research-group.github.io//papers/paper/bone-imaging</id>
<content type="html">
<h1 id="abstract">Abstract</h1>
<p>Serial block face imaging is a microscopy technique in which the top of a specimen is cut or ground away and a mosaic of images is collected of the newly revealed cross-section. Images collected from each slice are then digitally stacked to achieve 3D images. The development of fully automated image acquisition devices has made serial block face imaging more attractive by greatly reducing labour requirements. The technique is particularly attractive for studies of biological activity within cancellous bone as it has the capability of achieving direct, automated measures of biological and morphological traits and their associations with one another. When used with fluorescence microscopy, serial block face imaging has the potential to achieve 3D images of tissue as well as fluorescent markers of biological activity. Epifluorescence-based serial block face imaging presents a number of unique challenges for visualizing bone specimens due to noise generated by sub-surface signal and local variations in tissue autofluorescence. Here we present techniques for processing serial block face images of trabecular bone using a combination of non-uniform illumination correction, precise tiling of the mosaic in each cross-section, cross-section alignment for vertical stacking, removal of sub-surface signal and segmentation. The resulting techniques allow examination of bone surface texture that will enable 3D quantitative measures of biological processes in cancellous bone biopsies.</p>
</content>
</entry>
</feed>