<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content"><channel><title>Laurent Brusa</title><description>iOS Developer</description><link>https://multitudes.github.io</link><language>en</language><lastBuildDate>Mon, 16 Mar 2020 12:40:46 +0100</lastBuildDate><pubDate>Mon, 16 Mar 2020 12:40:46 +0100</pubDate><ttl>250</ttl><atom:link href="https://multitudes.github.io/feed.rss" rel="self" type="application/rss+xml"/><item><guid isPermaLink="true">https://multitudes.github.io/posts/Covid19</guid><title>Covid-19, Ihre Gemeinschaft und Sie - eine datenwissenschaftliche Perspektive</title><description></description><link>https://multitudes.github.io/posts/Covid19</link><pubDate>Tue, 10 Mar 2020 09:38:00 +0100</pubDate><content:encoded><![CDATA[<p>Geschrieben: 09 Mar 2020 von Jeremy Howard und Rachel Thomas</p><style type="text/css">
blockquote {
color: #7a7a7a;
border-left: .25rem solid #e5e5e5;
margin: .8 rem 0px;
padding-right: 5rem;
padding-left: 1.25rem;
}
a {
color: #268bd2;
}
</style><blockquote><p>Wir sind Datenwissenschaftler - das heißt, es ist unsere Aufgabe, zu verstehen, wie man Daten analysiert und interpretiert. Wenn wir die Daten um Covid-19 analysieren, sind wir sehr besorgt. Die anfälligsten Schichten der Gesellschaft, die Älteren und die Armen, sind am stärksten gefährdet, aber die Kontrolle der Ausbreitung und der Auswirkungen der Krankheit erfordert von uns allen eine Verhaltensänderung. Waschen Sie Ihre Hände gründlich und regelmäßig, vermeiden Sie Gruppen und Menschenansammlungen, sagen Sie Veranstaltungen ab und berühren Sie nicht Ihr Gesicht. In diesem Beitrag erklären wir, warum wir besorgt sind, und Sie sollten es auch sein. Für eine sehr gute Zusammenfassung der wichtigsten Informationen, bitte lesen Sie <a href="https://docs.google.com/document/u/1/d/1vumYWoiV7NlVoc27rMQvmVkVu5cOAbnaW_RKkq2RMaQ/mobilebasic?fbclid=IwAR0If1zzDDldgAy3DZmFhaxAmP046-dwAE_LCj3l9su2XLYpZe2By8mCj1A">Corona in Brief</a> von Ethan Alley (Präsident einer gemeinnützigen Organisation, die Technologien zur Verringerung der Risiken von Pandemien entwickelt).</p></blockquote><h2>Übersetzungen</h2><p>Jeder ist willkommen, diesen Artikel zu übersetzen, um seinen lokalen Gemeinschaften zu helfen, diese Themen zu verstehen. Bitte verlinken Sie mit entsprechendem Vermerk auf diese Seite zurück. Lassen Sie es uns auf <a href="https://twitter.com/jeremyphoward">Twitter</a> wissen, damit wir Ihre Übersetzung in diese Liste aufnehmen können.</p><ul><li><a href="https://medium.com/@xrb/covid-19-votre-communauté-et-vous-3e5f127910bc">Französisch</a></li><li><a href="https://datus.encryptedcommerce.net/public-health/2020/03/09/Covid-19-datascience-translation.html">Spanisch</a></li><li><a href="https://medium.com/@gpalmape/covid-19-sua-comunidade-e-você-uma-perspectiva-de-ciência-de-dados-cbdded20e436">Portugiesisch (Brasilien)</a></li><li><a href="https://medium.com/mmaetha/covid-19-สังคม-และตัวคุณเอง-ในมุมมองของวิทยาศาสตร์ข้อมูล-1323c34fd4df">Thailändisch</a></li><li><a href="https://multitudes.github.io/posts/Covid19/">Deutsch</a></li><li><a href="https://medium.com/@ricangius/covid-19-la-tua-comunit%C3%A0-e-te-una-prospettiva-dalla-data-science-196845fc9275">Italian</a></li><li><a href="https://www.nimirea.com/blog/2020/03/11/covid19-comunitatea-voastra-si-voi/">Romanian</a></li><li><a href="https://medium.com/@maciekwilczynski/koronawirus-twoja-spo%C5%82eczno%C5%9B%C4%87-i-ty-perspektywa-danych-8c10122db5bd">Polish</a></li><li><a href="https://www.twofyw.me/update/2020/02/10/Covid-19,-your-community,-and-you.html">Chinese</a></li><li><a href="https://docs.google.com/document/d/1ze9wLkoUDrJV3le3PKB64q1sOgtNU6KtynBywTBj2os/edit">Arabic</a></li><li><a href="https://fintalklabs.com/covid-19-corona-virus/">Marathi</a></li><li><a href="https://drive.google.com/file/d/1D7PEwRXM4KUyOze9_2D0JxwJdDWi8Ihu/view">Kannada</a></li><li><a href="https://asvcode.github.io/covid19-translations/markdown/2020/03/10/Swahili.html">Swahili</a></li><li><a href="https://qiita.com/yutakanzawa/items/89f313f4f26a3abc398e">Japanese</a><ul></ul></li></ul><h2>Inhalt</h2><ul><li><a href="#system">Wir brauchen ein funktionierendes medizinisches System</a></li><li><a href="#grippe">Dies ist nicht wie die Grippe</a></li><li><a href="#panik">"Keine Panik. 
Ruhe bewahren" ist nicht hilfreich</a></li><li><a href="#sie">Es geht nicht nur um Sie</a></li><li><a href="#kurve">Wir müssen die Kurve abflachen</a></li><li><a href="#react">Die Reaktion einer Gemeinschaft macht den Unterschied</a></li><li><a href="#usa">Wir haben keine guten Informationen in den USA</a></li><li><a href="#end">Zum Schluss</a></li></ul><h2 id='system'>Wir brauchen ein funktionierendes medizinisches System</h2><p>Vor etwas mehr als 2 Jahren bekam eine von uns (Rachel) eine Hirninfektion, an der etwa 1/4 der Betroffenen stirbt und 1/3 mit einer dauerhaften kognitiven Beeinträchtigung zurückbleibt. Viele andere enden mit dauerhaften Seh- und Hörschäden. Rachel war im Delirium, als sie über den Krankenhausparkplatz kroch. Sie hatte das Glück, eine schnelle Behandlung, Diagnose und Betreuung zu erhalten. Bis kurz vor diesem Ereignis war Rachel bei bester Gesundheit. Der sofortige Zugang zur Notaufnahme hat ihr mit ziemlicher Sicherheit das Leben gerettet.</p><p>Lassen Sie uns nun über Covid-19 sprechen und darüber, was in den kommenden Wochen und Monaten mit den Leuten in Rachels Situation geschehen könnte. Die Zahl der Menschen, die mit Covid-19 infiziert sind, verdoppelt sich alle 3 bis 6 Tage. Bei einer Verdoppelungsrate von drei Tagen bedeutet das, dass die Zahl der Menschen, die sich infiziert haben, innerhalb von drei Wochen um das 100-fache ansteigen kann (das ist eigentlich nicht ganz so einfach, aber lassen wir uns nicht von technischen Details ablenken). Eine von 10 infizierten Personen muss für viele Wochen ins Krankenhaus eingeliefert werden, und die meisten benötigen Sauerstoff. Obwohl es für dieses Virus noch sehr früh ist, gibt es bereits Regionen, in denen die Krankenhäuser völlig überfüllt sind und die Menschen nicht mehr in der Lage sind, die erforderliche Behandlung zu erhalten (nicht nur für Covid-19, sondern auch für alles andere, wie z.B. die lebensrettende Behandlung, die Rachel brauchte). In Italien zum Beispiel, wo noch vor einer Woche Beamte sagten, dass alles in Ordnung sei, sind jetzt sechzehn Millionen Menschen isoliert worden (<em>Update: 6 Stunden nach der Veröffentlichung des Artikels hat Italien das ganze Land isoliert</em>), und es werden Zelte wie dieses aufgebaut, um den Zustrom von Patienten zu bewältigen:</p><p align="center">
<img src="https://www.fast.ai/images/coronavirus/image1.jpeg" class="pure-img-responsive" title="Ein medizinisches Zelt in Italien"></img><br>
<caption>Ein medizinisches Zelt in Italien</caption>
</p><p>Dr. Antonio Pesenti, Leiter des regionalen Krisenreaktionskommandos in einem schwer betroffenen Gebiet Italiens, <a href="https://www.reuters.com/article/us-health-coronavirus-italy/alarmed-italy-locks-down-north-to-prevent-spread-of-coronavirus-idUSKBN20V06R">sagte</a>: "Wir sind jetzt gezwungen, eine Intensivbehandlung in Korridoren, in Operationssälen, in Erholungsräumen einzurichten... Eines der besten Gesundheitssysteme der Welt, in der Lombardei, ist einen Schritt vom Zusammenbruch entfernt".</p><h2 id='grippe'>Dies ist nicht wie die Grippe</h2><p>Die Grippe hat eine Todesrate von etwa 0,1% der Infektionen. Marc Lipsitch, der Direktor des "Center for Communicable Disease Dynamics" in Harvard, <a href="https://www.washingtonpost.com/opinions/2020/03/06/why-its-so-hard-pin-down-risk-dying-coronavirus/">schätzt</a>, dass sie bei Covid-19 bei 1-2% liegt. Die jüngste <a href="https://www.medrxiv.org/content/10.1101/2020.03.04.20031104v1.full.pdf">epidemiologische Modellierung</a> ergab im Februar eine Rate von 1,6% in China, sechzehnmal höher als bei der Grippe<sup id="fnref:epid"><a class="footnote" href="#fn:epid">1</a></sup> (dies könnte jedoch eine recht konservative Zahl sein, da die Raten stark ansteigen, wenn das medizinische System nicht mehr ausreicht). Die derzeit besten Schätzungen gehen davon aus, dass Covid-19 in diesem Jahr zehnmal mehr Menschen töten wird als die Grippe (und <a href="https://docs.google.com/spreadsheets/d/1ktSfdDrX_uJsdM08azBflVOm4Z5ZVE75nA0lGygNgaA/edit?usp=sharing">die Modellierung von Elena Grewal</a>, der ehemaligen Direktorin für Datenwissenschaft bei Airbnb, zeigt, dass es im schlimmsten Fall 100 Mal mehr sein könnten). Dies ist vor Berücksichtigung der enormen Auswirkungen auf das medizinische System, wie sie oben beschrieben wurden. Es ist verständlich, dass einige Leute versuchen, sich davon zu überzeugen, dass eine Krankheit ähnlich der Grippe nichts Neues ist, denn es ist sehr unangenehm zu akzeptieren, dass wir über diese Krankheit zu wenig wissen.</p><p>Der Versuch, intuitiv zu verstehen, dass die Zahl der Infizierten exponentiell zunimmt, ist nicht etwas, für das unser Gehirn ausgelegt ist. Wir müssen also die Daten als Wissenschaftler analysieren und nicht nur unserer Intuition vertrauen.</p><p align="center">
<img src="https://www.fast.ai/images/coronavirus/image2.png" class="pure-img-responsive" title="Total cases outside of China"></img><br>
<caption>Wie wird dies in 2 Wochen aussehen? In 2 Monaten?</caption>
</p><p>Für jede Person, die die Grippe hat, werden im Durchschnitt 1,3 andere Menschen infiziert. Das nennt man das "R0" für Grippe. Wenn R0 weniger als 1,0 beträgt, hört eine Infektion auf, sich auszubreiten und stirbt aus. Wenn sie über 1,0 liegt, breitet sie sich aus. R0 liegt derzeit bei 2-3 für Covid-19 außerhalb Chinas. Der Unterschied mag zwar klein klingen, aber nach 20 "Generationen" von Infizierten, die ihre Infektion weitergeben, würde ein R0 von 1,3 zu 146 Infektionen führen, aber ein R0 von 2,5 würde 36 Millionen Infektionen zur Folge haben! (Dies ist natürlich sehr schwankend) und ignoriert viele Auswirkungen in der realen Welt, aber es ist ein vernünftiges Beispiel für den relativen Unterschied zwischen Covid-19 und Grippe, wobei alle anderen Dinge gleich sind.</p><p>Beachten Sie, dass R0 nicht irgendeine grundlegende Eigenschaft einer Krankheit ist. Sie hängt stark von der Reaktion ab und kann sich im Laufe der Zeit ändern<sup id="fnref:r0"><a class="footnote" href="#fn:r0">2</a></sup>. Vor allem in China ist R0 für Covid-19 stark zurückgegangen und nähert sich jetzt dem Wert 1,0! Wie, fragen Sie? Indem man Maßnahmen in einem Ausmaß ergreift, das in einem Land wie den USA schwer vorstellbar wäre - zum Beispiel durch die vollständige Absperrung vieler Riesenstädte und die Entwicklung eines Testverfahrens, das es ermöglicht, mehr als eine Million Menschen pro Woche zu testen.</p><p>Eine Sache, die in den sozialen Netzwerken (auch von stark verfolgten Accounts wie Elon Musk) häufig vorkommt, ist ein Missverständnis des Unterschieds zwischen logistischem und exponentiellem Wachstum. Das "logistische" Wachstum bezieht sich auf das "s-förmige" Wachstumsmuster der sich in der Praxis ausbreitenden Epidemie. Offensichtlich kann ein exponentielles Wachstum nicht ewig anhalten, da sonst mehr Menschen infiziert würden als in der Welt! Daher müssen die Infektionsraten letztendlich immer abnehmen, was zu einer s-förmigen (als sigmoid bezeichneten) Wachstumsrate im Laufe der Zeit führt. Das abnehmende Wachstum tritt jedoch nur aus einem Grund auf - es ist keine Zauberei. Die Hauptgründe sind:</p><ul><li>Massive und effektive gemeinschaftliche Reaktion, oder</li><li>Ein so großer Prozentsatz der Menschen ist infiziert, dass es weniger nicht infizierte Menschen gibt, auf die man sich ausbreiten kann.</li></ul><p>Daher macht es keinen logischen Sinn, sich auf das logistische Wachstumsmuster zu verlassen, um eine Pandemie "unter Kontrolle" zu bringen.</p><p>Eine weitere Sache, die es schwierig macht, die Auswirkungen von Covid-19 in Ihrer lokalen Gemeinschaft intuitiv zu verstehen, ist die Tatsache, dass zwischen der Infektion und dem Krankenhauseinweisung eine sehr bedeutende Verzögerung liegt - im Allgemeinen etwa 11 Tage. Das mag nicht lange erscheinen, aber wenn man es mit der Anzahl der in dieser Zeit infizierten Menschen vergleicht, bedeutet es, dass die Infektion in der Gemeinschaft bereits so weit fortgeschritten ist, dass es 5-10 Mal mehr Menschen gibt, die behandelt werden müssen, wenn man merkt, dass die Krankenhäuser voll sind.</p><p>Beachten Sie, dass es einige frühe Anzeichen dafür gibt, dass die Auswirkungen in Ihrer Region zumindest etwas vom Klima abhängen könnten. 
Das Papier <a href="https://poseidon01.ssrn.com/delivery.php?ID=091071099092098096101097074089104068104013035023062021010031112088025099126064001093097030102106046016114116082016095089113023126034089078012119081090111118122007110026000085123071022022127025026080005029001020025126022000066075021086079031101116126112&EXT=pdf">Temperatur- und Breitenanalyse zur Vorhersage der potenziellen Ausbreitung und Saisonalität von COVID-19</a> weist darauf hin, dass sich die Krankheit bisher in milden Klimazonen ausgebreitet hat (leider liegt der Temperaturbereich in San Francisco, wo wir leben, genau in diesem Bereich; er umfasst auch die wichtigsten Bevölkerungszentren Europas, einschließlich London).</p><h2 id='panik'>"Keine Panik. Ruhe bewahren" ist nicht hilfreich.</h2><p>Eine häufige Reaktion, die wir in den sozialen Medien auf Menschen gesehen haben, die auf die Gründe für ihre Besorgnis hinweisen, ist "Keine Panik" oder "Ruhe bewahren". Dies ist, um es milde auszudrücken, nicht hilfreich. Niemand behauptet, dass Panik eine angemessene Reaktion sei. Aus irgendeinem Grund ist "Ruhe bewahren" jedoch eine sehr beliebte Reaktion in bestimmten Kreisen (aber nicht bei den Epidemiologen, deren Aufgabe es ist, diese Dinge zu verfolgen). Vielleicht hilft "Ruhe bewahren" einigen Menschen, sich besser über ihre eigene Untätigkeit zu fühlen, oder sie fühlen sich irgendwie überlegen gegenüber Menschen, von denen sie glauben, dass sie wie ein kopfloses Huhn herumlaufen.</p><p>Aber "Ruhe bewahren" kann leicht zu einem Versagen bei der Vorbereitung und Reaktion führen. In China wurden Zehnmillionen Menschen eingesperrt und zwei neue Krankenhäuser gebaut, bis sie die Statistiken erreicht hatten, die die USA jetzt haben. Italien hat zu lange gewartet, und gerade heute (Sonntag, 8. März) wurden 1492 neue Fälle und 133 neue Todesfälle gemeldet, obwohl 16 Millionen Menschen eingeschlossen wurden. Auf der Grundlage der besten Informationen, die wir zum jetzigen Zeitpunkt ermitteln können, befand sich Italien noch vor 2-3 Wochen in der gleichen Lage wie die USA und Großbritannien heute (in Bezug auf die Infektionsstatistik).</p><p>Beachten Sie, dass zu diesem Zeitpunkt fast alles über Covid-19 in der Luft liegt. Wir wissen nicht wirklich, wie schnell die Infektion verläuft oder wie hoch die Sterblichkeit ist, wir wissen nicht, wie lange es an der Oberfläche aktiv bleibt, wir wissen nicht, ob es unter warmen Bedingungen überlebt und sich ausbreitet. Alles, was wir haben, sind derzeit beste Vermutungen auf der Grundlage der besten Informationen, die die Menschen zusammenstellen können. Und denken Sie daran, dass die überwiegende Mehrheit dieser Informationen in China liegt, in chinesischer Sprache. Der beste Weg, die bisherigen Erfahrungen in China zu verstehen, ist derzeit die Lektüre des ausgezeichneten <a href="https://www.who.int/docs/default-source/coronaviruse/who-china-joint-mission-on-covid-19-final-report.pdf">Report of the WHO-China Joint Mission on Coronavirus Disease 2019</a>, der auf einer gemeinsamen Mission von 25 nationalen und internationalen Experten aus China, Deutschland, Japan, Korea, Nigeria, Russland, Singapur, den Vereinigten Staaten von Amerika und der Weltgesundheitsorganisation (WHO) beruht.</p><p>Wenn eine gewisse Unsicherheit besteht, dass es sich vielleicht nicht um eine globale Pandemie handelt und vielleicht alles vorbeigehen könnte, ohne dass das Krankenhauswesen zusammenbricht, bedeutet das nicht, dass die richtige Reaktion darin besteht, nichts zu tun. 
Das wäre enorm spekulativ und keine optimale Reaktion unter jedem Bedrohungsszenario. Es scheint auch äußerst unwahrscheinlich, dass Länder wie Italien und China große Teile ihrer Wirtschaft ohne guten Grund effektiv stilllegen würden. Es stimmt auch nicht mit den tatsächlichen Auswirkungen überein, die wir vor Ort in infizierten Gebieten sehen, in denen das medizinische System nicht in der Lage ist, damit umzugehen (Italien beispielsweise verwendet 462 Zelte für die "Pre-triage" und muss immer noch Patienten auf der Intensivstation <a href="https://www.repubblica.it/cronaca/2020/03/08/news/coronavirus_situazione_italia-250665818/?ref=RHPPTP-BH-I250661466-C12-P5-S1.12-T1">aus infizierten Gebieten umziehen</a>).</p><p>Stattdessen besteht die durchdachte, vernünftige Reaktion darin, die von Experten empfohlenen Schritte zu befolgen, um die Verbreitung von Infektionen zu vermeiden:</p><ul><li>Vermeiden Sie große Gruppen und Menschenmengen.</li><li>Ereignisse abbrechen</li><li>Arbeit von zu Hause aus, wenn überhaupt möglich</li><li>Waschen Sie Ihre Hände beim Kommen und Gehen von zu Hause und häufig bei der Arbeit.</li><li>Vermeiden Sie es, Ihr Gesicht zu berühren, besonders wenn Sie sich außerhalb Ihres Hauses befinden (nicht einfach!).</li><li>Desinfizieren Sie Oberflächen und Verpackungen (es ist möglich, dass das Virus 9 Tage lang auf Oberflächen aktiv bleibt, obwohl dies noch nicht sicher bekannt ist).</li></ul><h2 id='sie'>Es geht nicht nur um Sie</h2><p>Wenn Sie unter 50 Jahre alt sind und keine Risikofaktoren wie ein geschwächtes Immunsystem, eine Herz-Kreislauf-Erkrankung, eine Vorgeschichte mit Rauchen in der Vergangenheit oder andere chronische Krankheiten haben, können Sie sich damit trösten, dass Covid-19 Sie wahrscheinlich nicht töten wird. Aber wie Sie darauf reagieren, ist immer noch sehr wichtig. Die Wahrscheinlichkeit, dass Sie sich infizieren, ist immer noch genauso groß, und wenn Sie es tun, ist die Wahrscheinlichkeit, dass Sie andere anstecken, genauso groß. Im Durchschnitt infiziert jede infizierte Person mehr als zwei weitere Personen, und sie werden infektiös, bevor sie Symptome zeigen. Wenn Sie Eltern haben, die Ihnen wichtig sind, oder Großeltern, und planen, Zeit mit ihnen zu verbringen, und später feststellen, dass Sie für die Ansteckung mit Covid-19 verantwortlich sind, wäre das eine schwere Belastung, mit der Sie leben müssten.</p><p>Selbst wenn Sie keinen Kontakt zu Menschen über 50 haben, ist es wahrscheinlich, dass Sie mehr Mitarbeiter und Bekannte mit chronischen Krankheiten haben, als Ihnen bewusst ist. <a href="https://www.talentinnovation.org/_private/assets/DisabilitiesInclusion_KeyFindings-CTI.pdf">Forschungsergebnisse</a> zeigen, dass nur wenige Menschen ihre gesundheitlichen Probleme am Arbeitsplatz offenlegen, wenn sie es <a href="https://medium.com/@racheltho/the-tech-industry-is-failing-people-with-disabilities-and-chronic-illnesses-8e8aa17937f3">aus Angst vor Diskriminierung</a> vermeiden können. Wir beide gehören zu den Hochrisikokategorien, aber viele Menschen, mit denen wir regelmäßig zu tun haben, haben dies vielleicht nicht gewusst.</p><p>Und natürlich geht es nicht nur um die Menschen in Ihrer unmittelbaren Umgebung. Dies ist eine höchst wichtige ethische Frage. Jede Person, die ihr Bestes tut, um zur Kontrolle der Verbreitung des Virus beizutragen, hilft ihrer gesamten Gemeinschaft, die Infektionsrate zu verlangsamen. 
Wie Zeynep Tufekci<a href="https://blogs.scientificamerican.com/observations/preparing-for-coronavirus-to-strike-the-u-s/"> in Scientific American</a> schrieb: "Die Vorbereitung auf die fast unvermeidliche weltweite Verbreitung dieses Virus ... ist eine der pro-sozialen, altruistischen Dinge, die man tun kann". Sie fährt fort:</p><blockquote><p>Wir sollten uns vorbereiten, nicht weil wir uns persönlich gefährdet fühlen könnten, sondern damit wir dazu beitragen können, das Risiko für alle zu verringern. Wir sollten uns nicht vorbereiten, weil wir vor einem unkontrollierbaren Weltuntergangsszenario stehen, sondern weil wir jeden Aspekt dieses Risikos, dem wir als Gesellschaft ausgesetzt sind, verändern können. Richtig, Sie sollten sich vorbereiten, weil Ihre Nachbarn Sie brauchen, um sich vorzubereiten - vor allem Ihre älteren Nachbarn, Ihre Nachbarn, die in Krankenhäusern arbeiten, Ihre Nachbarn mit chronischen Krankheiten und Ihre Nachbarn, die vielleicht nicht die Mittel oder die Zeit haben, sich vorzubereiten, weil ihnen die Mittel oder die Zeit fehlen.</p></blockquote><p>Dies hat uns persönlich beeinflusst. Der größte und wichtigste Kurs, den wir jemals bei fast.ai eingerichtet haben und der für uns den Höhepunkt jahrelanger Arbeit darstellt, sollte in einer Woche an der Universität von San Francisco beginnen. Letzten Mittwoch (4. März) haben wir die Entscheidung getroffen, das Ganze online zu verlegen. Wir waren einer der ersten großen Kurse, die online gingen. Warum haben wir das getan? Weil uns Anfang letzter Woche klar wurde, dass wir, wenn wir diesen Kurs durchführen, implizit Hunderte von Menschen dazu ermutigen würden, sich in einem geschlossenen Raum zu treffen, und zwar mehrere Male über einen mehrwöchigen Zeitraum hinweg. Gruppen in geschlossenen Räumen zusammenzubringen, ist das Schlimmste, was man tun kann. Wir fühlten uns ethisch verpflichtet, dafür zu sorgen, dass dies zumindest in diesem Fall nicht passiert. Es war eine herzzerreißende Entscheidung. Die Zeit, die wir direkt mit unseren Studenten verbrachten, war eine der größten Freuden und produktivsten Perioden jedes Jahres. Und wir hatten Studenten, die aus der ganzen Welt einfliegen wollten, die wir wirklich nicht im Stich lassen wollten<sup id="fnref:r0"><a class="footnote" href="#fn:letdown">3</a></sup>.</p><p>Aber wir wussten, dass es das Richtige war, denn sonst würden wir wahrscheinlich die Ausbreitung der Krankheit in unserer Gemeinde verstärken<sup id="fnref:r0"><a class="footnote" href="#fn:changes">4</a></sup>.</p><h2 id='kurve'>Wir müssen die Kurve abflachen</h2><p>Dies ist äußerst wichtig, denn wenn wir die Infektionsrate in einer Gemeinde verlangsamen können, dann geben wir den Krankenhäusern in dieser Gemeinde Zeit, sich sowohl um die infizierten Patienten als auch um die regelmäßige Patientenlast zu kümmern, die sie bewältigen müssen. Dies wird als "Abflachung der Kurve" beschrieben und in diesem illustrativen Schaubild deutlich dargestellt:</p><p align="center">
<img src="https://www.fast.ai/images/coronavirus/image3.jpeg" class="pure-img-responsive" title="Bleiben unter dieser gepunkteten Linie bedeutet alles"></img><br>
<caption>Wir müssen unter dieser gepunkteten Linie bleiben</caption>
</p><p>Farzad Mostashari, der ehemalige nationale Koordinator für Gesundheits-IT, erklärte dies: "Jeden Tag werden neue Fälle identifiziert, die keine Reisegeschichte oder Verbindung zu einem bekannten Fall haben, und wir wissen, dass diese wegen der Verzögerungen bei den Tests nur die Spitze des Eisbergs sind. Das bedeutet, dass in den nächsten zwei Wochen die Zahl der diagnostizierten Fälle explodieren wird... Der Versuch, eine Eindämmung zu erreichen, wenn sich die Bevölkerung exponentiell ausbreitet, ist so, als würde man sich darauf konzentrieren, die Funken zu löschen, wenn das Haus brennt. Wenn das passiert, müssen wir unsere Strategien auf Schutzmaßnahmen umstellen, um die Ausbreitung zu verlangsamen und die Auswirkungen auf die Gesundheitsversorgung zu reduzieren. Wenn wir die Ausbreitung der Krankheit so gering halten können, dass unsere Krankenhäuser die Belastung bewältigen können, dann können die Menschen Zugang zu einer Behandlung erhalten. Aber wenn die Fälle zu schnell kommen, dann werden diejenigen, die einen Krankenhausaufenthalt benötigen, diesen nicht bekommen.</p><p>So könnte die Kalkulation aussehen, so <a href="https://twitter.com/LizSpecht/status/1236095186737852416">Liz Specht</a>:</p><blockquote><p>In den USA gibt es etwa 2,8 Krankenhausbetten pro 1000 Menschen. Bei einer Bevölkerung von 330 Mio. Einwohnern sind dies ~1 Mio. Betten. Zu einem bestimmten Zeitpunkt sind bereits 65 % dieser Betten belegt. Damit bleiben landesweit etwa 330.000 Betten verfügbar (vielleicht etwas weniger zu dieser Jahreszeit mit regelmäßiger Grippesaison usw.). Vertrauen wir den Zahlen Italiens und gehen wir davon aus, dass etwa 10% der Fälle schwerwiegend genug sind, um einen Krankenhausaufenthalt zu erfordern. (Denken Sie daran, dass bei vielen Patienten der Krankenhausaufenthalt wochenlang dauert - mit anderen Worten, der Umsatz wird sehr langsam sein, da sich die Betten mit COVID19-Patienten füllen). Nach dieser Schätzung werden bis etwa 8. Mai alle offenen Krankenhausbetten in den USA gefüllt sein. (Dies sagt natürlich nichts darüber aus, ob diese Betten für die Isolierung von Patienten mit einem hoch infektiösen Virus geeignet sind). Wenn wir uns beim Bruchteil der schweren Fälle um den Faktor zwei irren, ändert sich die Zeitlinie der Bettsättigung nur um 6 Tage in beide Richtungen. Wenn 20% der Fälle einen Krankenhausaufenthalt erfordern, gehen uns die Betten bis zum 2. Mai aus. Wenn nur 5% der Fälle einen Krankenhausaufenthalt erfordern, können wir es bis zum 14. Mai schaffen. Bei 2,5% schaffen wir es bis zum 20. Mai. Dies setzt natürlich voraus, dass die Nachfrage nach Betten aus anderen (nicht-COVID19) Gründen nicht steigt, was eine zweifelhafte Annahme zu sein scheint. Mit zunehmender Belastung des Gesundheitssystems, Rx-Knappheit usw. können Menschen mit chronischen Krankheiten, die normalerweise gut behandelt werden, in schwere medizinische Notlagen geraten, die eine intensive Pflege und einen Krankenhausaufenthalt erfordern.</p></blockquote><h2 id='react'>Die Reaktion einer Gemeinschaft macht den Unterschied</h2><p>Wie wir bereits diskutiert haben, ist diese Kalkulation keine Gewissheit - China hat bereits gezeigt, dass es möglich ist, die Ausbreitung durch extreme Maßnahmen zu reduzieren. Ein weiteres großartiges Beispiel für eine erfolgreiche Reaktion ist Vietnam, wo u.a. eine landesweite Werbekampagne (einschließlich eines eingängigen Liedes!) 
schnell die Reaktion der Gemeinde mobilisierte und dafür sorgte, dass die Menschen ihr Verhalten entsprechend anpassten.</p><p>Dies ist nicht nur eine hypothetische Situation - sie wurde bei der Grippepandemie von 1918 deutlich sichtbar. In den Vereinigten Staaten zeigten zwei Städte sehr unterschiedliche Reaktionen auf die Pandemie: Philadelphia ging mit einer riesigen Parade von 200.000 Menschen voran, um Geld für den Krieg zu sammeln. Aber St. Louis führte sorgfältig konzipierte Prozesse ein, um die sozialen Kontakte zu minimieren und so die Verbreitung des Virus einzudämmen, und sagte gleichzeitig alle Großveranstaltungen ab. So sah die Zahl der Todesfälle in jeder Stadt aus, wie in den <a href="https://www.pnas.org/content/104/18/7582">Proceedings of the National Academy of Sciences</a> gezeigt wird:</p><p align="center">
<img src="https://www.fast.ai/images/coronavirus/image4.jpeg" class="pure-img-responsive" title="Auswirkungen der unterschiedlichen Reaktionen auf die Grippepandemie von 1918"></img><br>
<caption>Auswirkungen der unterschiedlichen Reaktionen auf die Grippepandemie von 1918</caption>
</p><p>Die Situation in Philadelphia wurde extrem schlimm und erreichte sogar einen Punkt, an dem <a href="https://www.history.com/news/spanish-flu-pandemic-dead">es nicht mehr genug Bestattungssärge oder Leichenhallen gab</a>, um die große Zahl der Grippetoten zu bewältigen.</p><p>Richard Besser, der während der H1N1-Pandemie 2009 amtierender Direktor der Centers for Disease Control and Prevention war, <a href="https://www.washingtonpost.com/opinions/as-coronavirus-spreads-the-bill-for-our-public-health-failures-is-due/2020/03/05/9da09ed6-5f10-11ea-b29b-9db42f7803a7_story.html?utm_campaign=wp_week_in_ideas&utm_medium=email&utm_source=newsletter&wpisrc=nl_ideas">sagt</a>, dass in den USA "das Risiko der Exposition und die Fähigkeit, sich und seine Familie zu schützen, unter anderem vom Einkommen, dem Zugang zur Gesundheitsversorgung und dem Einwanderungsstatus abhängt". Er weist darauf hin:</p><blockquote><p>Ältere und behinderte Menschen sind besonders gefährdet, wenn ihr tägliches Leben und ihre Unterstützungssysteme gestört werden. Diejenigen, die keinen einfachen Zugang zu medizinischer Versorgung haben, einschließlich ländlicher und einheimischer Gemeinschaften, könnten in Zeiten der Not mit enormen Entfernungen konfrontiert sein. Menschen, die auf engem Raum leben - ob in öffentlichen Wohnungen, Pflegeheimen, Gefängnissen, Notunterkünften oder sogar Obdachlose auf der Straße - können in Wellen leiden, wie wir bereits im Bundesstaat Washington gesehen haben. Und die Verwundbarkeit der Niedriglohn-Gigantenwirtschaft mit nicht bezahlten Arbeitnehmern und prekären Arbeitszeiten wird während dieser Krise für alle sichtbar sein. Fragen Sie die 60 Prozent der US-Arbeitskräfte, die auf Stundenbasis bezahlt werden, wie einfach es ist, in einem Moment der Not eine Auszeit zu nehmen.</p></blockquote><p>Das US Bureau of Labor Statistics zeigt, dass <a href="https://www.bls.gov/opub/ted/2018/higher-wage-workers-more-likely-than-lower-wage-workers-to-have-paid-leave-benefits-in-2018.htm">weniger als ein Drittel der Beschäftigten</a> im untersten Einkommensbereich Zugang zu bezahltem Krankenstand haben:</p><p align="center">
<img src="https://www.fast.ai/images/coronavirus/image5.png" class="pure-img-responsive" title="Die meisten armen Amerikaner sind nicht krankgeschrieben, müssen also zur Arbeit gehen."></img><br>
<caption>Die meisten armen Amerikaner sind nicht krankgeschrieben, müssen also zur Arbeit gehen.</caption>
</p><h2 id='usa'>Wir haben keine guten Informationen in den USA</h2><p>Eines der großen Probleme in den USA ist, dass nur sehr wenige Tests durchgeführt werden und dass die Testergebnisse nicht richtig weitergegeben werden, was bedeutet, dass wir nicht wissen, was tatsächlich geschieht. Scott Gottlieb, der frühere FDA-Beauftragte, erklärte, dass in Seattle bessere Tests durchgeführt wurden und wir dort eine Infektion sehen: "Der Grund dafür, dass wir schon früh von dem Ausbruch von Covid-19 in Seattle wussten, war die Arbeit unabhängiger Wissenschaftler zur Überwachung der Sentinel-Seuche. In anderen Städten wurde eine solche Überwachung nie ganz in Gang gesetzt. Daher sind andere US-Hot-Spots möglicherweise noch nicht vollständig entdeckt worden." Laut <a href="https://www.theatlantic.com/health/archive/2020/03/how-many-americans-have-been-tested-coronavirus/607597/">The Atlantic</a> versprach Vizepräsident Mike Pence, dass "etwa 1,5 Millionen Tests" diese Woche zur Verfügung stehen würden, aber bisher wurden weniger als 2.000 Personen in den gesamten USA getestet. Robinson Meyer und Alexis Madrigal von The Atlantic beriefen sich dabei auf die Arbeit des <a href="https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRwAqp96T9sYYq2-i7Tj0pvTf6XVHjDSMIKBdZHXiCGGdNC0ypEU9NbngS8mxea55JuCFuua1MUeOj5/pubhtml">COVID Tracking Project</a>, so Robinson Meyer und Alexis Madrigal von The Atlantic:</p><blockquote><p>Die Zahlen, die wir gesammelt haben, deuten darauf hin, dass die amerikanische Reaktion auf das Covid-19 und die von ihm verursachte Krankheit COVID-19 schockierend träge war, insbesondere im Vergleich zu anderen entwickelten Ländern. Die CDC bestätigte vor acht Tagen, dass das Virus in den Vereinigten Staaten in der Gemeinschaft übertragen wurde - dass es Amerikaner infizierte, die weder ins Ausland gereist waren noch mit anderen in Kontakt standen, die es hatten. In Südkorea wurden innerhalb einer Woche nach dem ersten Fall einer Gemeindeübertragung mehr als 66.650 Personen getestet, und es wurde schnell möglich, 10.000 Personen pro Tag zu testen.</p></blockquote><p>Teil des Problems ist, dass dies zu einem politischen Thema geworden ist. Insbesondere Präsident Donald Trump hat deutlich gemacht, dass er "die Zahlen" (d.h. die Zahl der Infizierten in den USA) niedrig halten möchte. Dies ist ein Beispiel dafür, dass die Optimierung von Metriken die Erzielung guter Ergebnisse in der Praxis behindert. (Weitere Informationen zu diesem Thema finden Sie im Papier [The Ethics of Data Science: <a href="https://arxiv.org/abs/2002.08512">TThe Problem with Metrics is a Fundamental Problem for AI</a>). <a href="https://twitter.com/JeffDean/status/1236489084870119427">Jeff Dean, Leiter der KI bei Google, twitterte</a> seine Besorgnis über die Probleme der politisierten Desinformation:</p><blockquote><p>Als ich bei der WHO arbeitete, war ich Teil des Globalen AIDS-Programms (jetzt UNAIDS), das geschaffen wurde, um der Welt bei der Bekämpfung der HIV/AIDS-Pandemie zu helfen. Die Mitarbeiter dort waren engagierte Ärzte und Wissenschaftler, die sich intensiv mit der Bewältigung dieser Krise beschäftigten. In Krisenzeiten sind klare und genaue Informationen von entscheidender Bedeutung, um allen zu helfen, richtige und fundierte Entscheidungen darüber zu treffen, wie sie reagieren sollen (Regierungen von Ländern, Staaten und Gemeinden, Unternehmen, NGOs, Schulen, Familien und Einzelpersonen). 
Mit den richtigen Informationen und Richtlinien, um den besten medizinischen und wissenschaftlichen Experten zuzuhören, werden wir alle Herausforderungen wie die von HIV/AIDS oder COVID-19 bewältigen. Bei einer von politischen Interessen getriebenen Desinformation besteht die reale Gefahr, die Dinge noch viel schlimmer zu machen, wenn wir angesichts einer wachsenden Pandemie nicht schnell und entschlossen handeln und aktiv Verhaltensweisen fördern, die die Krankheit tatsächlich schneller verbreiten. Diese ganze Situation ist unglaublich schmerzhaft, wenn man zusehen muss, wie sie sich entwickelt.</p></blockquote><p>Es sieht nicht so aus, als ob der politische Wille vorhanden wäre, die Dinge umzukehren, wenn es um Transparenz geht. Der Gesundheits- und Sozialminister Alex Azar begann <a href="https://www.wired.com/story/trumps-coronavirus-press-event-was-even-worse-than-it-looked/">laut Wired</a> "über die Tests zu sprechen, mit denen das Gesundheitspersonal feststellt, ob jemand mit dem neuen Coronavirus infiziert ist. Der Mangel an diesen Kits hat zu einem gefährlichen Mangel an epidemiologischen Informationen über die Verbreitung und den Schweregrad der Krankheit in den USA geführt, der durch die Undurchsichtigkeit der Regierung noch verschärft wurde. Azar versuchte zu sagen, dass weitere Tests in Erwartung einer Qualitätskontrolle unterwegs seien. Aber sie machten weiter:</p><blockquote><p>Dann schnitt Trump Azar den Weg ab. "Aber ich denke, wichtig ist, dass jeder, der jetzt und gestern einen Test braucht, einen Test bekommt. Sie sind da, sie haben die Tests, und die Tests sind schön. Jeder, der einen Test braucht, bekommt einen Test", sagte Trump. Das ist nicht wahr. Vizepräsident Pence sagte den Reportern am Donnerstag, dass die USA nicht genug Testkits hätten, um die Nachfrage zu befriedigen.</p></blockquote><p>Andere Länder reagieren viel schneller und deutlicher als die USA. Viele Länder in Südostasien zeigen großartige Ergebnisse, darunter Taiwan, wo R0 jetzt auf 0,3 gesunken ist, und Singapur, das als Modell für die <a href="https://www.medpagetoday.com/infectiousdisease/covid19/85254">COVID-19-Reaktion</a> vorgeschlagen wird. Aber nicht nur in Asien; in Frankreich beispielsweise ist jede Versammlung von >1000 Personen verboten, und in drei Distrikten sind die Schulen jetzt geschlossen.</p><h2 id='end'>Zum Schluss</h2><p>Covid-19 ist ein bedeutendes gesellschaftliches Problem, und wir können und sollten alle daran arbeiten, die Ausbreitung der Krankheit einzudämmen. Das bedeutet:</p><ul><li>Vermeidung großer Gruppen und Menschenmengen</li><li>Absagen von Veranstaltungen.</li><li>Wenn möglich, von zu Hause aus arbeiten</li><li>Händewaschen beim Kommen und Gehen von zu Hause und häufig auch unterwegs</li><li>Vermeiden Sie es, Ihr Gesicht zu berühren, besonders wenn Sie sich außerhalb Ihres Hauses befinden.</li></ul><em>Hinweis: Aufgrund der Dringlichkeit, dies herauszubekommen, waren wir nicht so vorsichtig, wie wir es normalerweise gerne wären, wenn wir die Arbeit, auf die wir uns verlassen, zitieren und anerkennen würden. Bitte lassen Sie uns wissen, wenn wir etwas übersehen haben.</em><p>Vielen Dank an Sylvain Gugger und Alexis Gallagher für Rückmeldungen und Kommentare.</p><h4>Anmerkung des Übersetzers</h4><em>Covid-19, Ihre Gemeinschaft und Sie - eine datenwissenschaftliche Perspektive - Übersetzung des Artikels von Jeremy Howard und Rachel Thomas ins Deutsche. Der Originalbeitrag ist <a href="https://www.fast.ai/2020/03/09/coronavirus/#this-is-not-like-the-flu">hier</a>.
Und wenn jemand Rechtschreib-, Grammatik- oder Tippfehler sieht und beitragen möchte: Ich habe gerade den <a href="https://github.com/multitudes/covid19/blob/master/Covid19.md">Markdown auf GitHub</a> hochgeladen (Fußnoten funktionieren dort nicht ;)</em><br>
</br><h2>Fußnoten</h2><p>(Klicken Sie ↩ auf eine Fußnote, um zu dem Ort zurückzukehren, an dem Sie sich befanden).</p><div class="footnotes">
<ol>
<li id="fn:epid"><p>
<em>Epidemiologen</em> sind Menschen, die die Ausbreitung von Krankheiten untersuchen. Es hat sich herausgestellt, dass die Einschätzung von Dingen wie Mortalität und R0 eigentlich ziemlich schwierig ist, so dass es ein ganzes Feld gibt, das sich darauf spezialisiert hat, dies gut zu tun. Wir sind misstrauisch gegenüber Menschen, die einfache Kennzahlen und Statistiken verwenden, um Ihnen zu sagen, wie sich Covid-19 verhält. Schauen Sie sich stattdessen die von Epidemiologen erstellten Modelle an. <a class="reversefootnote" href="#fnref:epid">↩</a></p>
</li>
<li id="fn:r0"><p>
Nun, technisch gesehen ist das nicht wahr. "R0" bezieht sich streng genommen auf die Infektionsrate bei Abwesenheit einer Reaktion. Aber da uns das eigentlich nie wirklich interessiert, lassen wir hier unsere Definitionen ein wenig nachlässig sein.<a class="reversefootnote" href="#fnref:r0">↩</a></p>
</li>
<li id="fn:letdown"><p>
Seit dieser Entscheidung haben wir hart daran gearbeitet, einen Weg zu finden, um den Kurs virtuell durchzuführen, und wir hoffen, dass er noch besser sein wird, als die Version vor Ort gewesen wäre. Wir konnten jetzt den Kurs für jeden auf der Welt anbieten und werden jeden Tag virtuelle Studien- und Projektgruppen durchführen.<a class="reversefootnote" href="#fnref:letdown">↩</a></p>
</li>
<li id="fn:changes"><p>
Wir haben auch viele andere kleinere Änderungen in unserem Lebensstil vorgenommen, darunter zu Hause zu trainieren, anstatt ins Fitnessstudio zu gehen, alle unsere Sitzungen auf Videokonferenz zu verlegen und Abendveranstaltungen auf die wir uns schon immer gefreut hatten, zu überspringen.<a class="reversefootnote" href="#fnref:changes">↩</a></p>
</li>
</ol>
</div>]]></content:encoded></item><item><guid isPermaLink="true">https://multitudes.github.io/posts/ProgrammaticUISetup</guid><title>UITabbar: Comparing storyboards vs programmatic setup</title><description></description><link>https://multitudes.github.io/posts/ProgrammaticUISetup</link><pubDate>Sat, 1 Feb 2020 09:38:00 +0100</pubDate><content:encoded><![CDATA[<br></br><p>After watching the course by <a href="https://seanallen.teachable.com">Sean Allen</a> about doing your UI 100% programmatically, I was inspired to write a small blog post about the two ways to implement a UITabBar.</p><p>I will start with the storyboards and then show the programmatic way. The storyboard method can feel easier, especially at the beginning, but when it comes to changing the UI later, the programmatic way is much more flexible. I recommend trying both!</p><h1>Doing your UI With Storyboards</h1><p>Storyboards are great, but they sure take up a lot of space!</p><p>First create a new project in Xcode and look for the <code>Tabbed App</code> template.</p><p align="center">
<img src="https://multitudes.github.io/images/tabbar/8.png" class="pure-img-responsive" title="tabbar image"></img>
</p><p>Your storyboard will look like this:</p><p align="center">
<img src="https://multitudes.github.io/images/tabbar/9.png" class="pure-img-responsive" title="tabbar image"></img>
</p><p>It is ready to be edited. Run the project on the simulator and try it out. Both views will look like this out of the box:</p><p align="center">
<img src="https://multitudes.github.io/images/tabbar/13.png" class="pure-img-responsive" width="230" title="tabbar image"></img>
<img src="https://multitudes.github.io/images/tabbar/14.png" class="pure-img-responsive" width="230" title="tabbar image"></img>
</p><p>The Tabbar Items can be customised easily in the inspector on the right:</p><p align="center">
<img src="https://multitudes.github.io/images/tabbar/10.png" class="pure-img-responsive" title="tabbar image"></img>
</p><h1>Doing your UI programmatically: Main setup</h1><p>First create a new project in Xcode, selecting Storyboard (not SwiftUI) as the user interface.</p><p align="center">
<img src="https://multitudes.github.io/images/tabbar/1.png" class="pure-img-responsive" title="tabbar image"></img>
</p><p>Delete the <code>Main.storyboard</code> file.</p><p align="center">
<img src="https://multitudes.github.io/images/tabbar/4.png" class="pure-img-responsive" title="tabbar image"></img>
</p><p>Then, in the project settings, remove the references to the main storyboard in two places:</p><p align="center">
<img src="https://multitudes.github.io/images/tabbar/2.png" class="pure-img-responsive" title="tabbar image"></img>
</p><p>Make sure to hit the Enter key:</p><p align="center">
<img src="https://multitudes.github.io/images/tabbar/3.png" class="pure-img-responsive" title="tabbar image"></img>
</p><p>The two places are the General settings tab (press Enter when the field is empty) and the <code>Info.plist</code> file.</p><p align="center">
<img src="https://multitudes.github.io/images/tabbar/5.png" class="pure-img-responsive" title="tabbar image"></img>
</p><h2>First steps</h2><p>In iOS 13 Apple decided to split the <code>AppDelegate.swift</code> file in two: an <code>AppDelegate.swift</code> and a <code>SceneDelegate.swift</code>. This allows for multi-window operation. Before, you had a UIWindow and a single window would be displayed. Now you have scenes, which allows two screens of the same app to be displayed at the same time, for example on iPad. There is still one AppDelegate, but now there can be multiple scenes.</p><p>Inside <code>SceneDelegate.swift</code> we have a window property. It used to be in <code>AppDelegate.swift</code>. By default <code>SceneDelegate.swift</code> already gives us this hint about the window property:</p><p align="center">
<img src="https://multitudes.github.io/images/tabbar/7.png" class="pure-img-responsive" title="tabbar image"></img>
</p><p>In <code>SceneDelegate.swift</code> we need to set up our <code>windowScene</code> in the function <code>scene(willConnectTo:)</code>, which is the first function to run:</p><pre><code><span class="comment">// we give a name to our variable</span>
<span class="keyword">guard let</span> windowScene = (scene <span class="keyword">as</span>? <span class="type">UIWindowScene</span>) <span class="keyword">else</span> { <span class="keyword">return</span> }
<span class="comment">// our window will be the whole screen</span>
window = <span class="type">UIWindow</span>(frame: windowScene.<span class="property">coordinateSpace</span>.<span class="property">bounds</span>)
<span class="comment">//assign windowScene to the windowScene property of our window</span>
window?.<span class="property">windowScene</span> = windowScene
<span class="comment">// assign the root controller to the window</span>
window?.<span class="property">rootViewController</span> = <span class="type">ViewController</span>()
<span class="comment">// and make it visible at start</span>
window?.<span class="call">makeKeyAndVisible</span>()
</code></pre><p>We can test it in our <code>ViewController.swift</code> file using <code>view.backgroundColor = .systemGreen</code> in <code>viewDidLoad()</code></p><pre><code><span class="keyword">class</span> ViewController: <span class="type">UIViewController</span> {
<span class="keyword">override func</span> viewDidLoad() {
<span class="keyword">super</span>.<span class="call">viewDidLoad</span>()
view.<span class="property">backgroundColor</span> = .<span class="dotAccess">systemGreen</span>
}
}
</code></pre><h2>Set up the tabbar</h2><p>The tabbar will hold our two navigation controllers.<br>Delete the file <code>ViewController.swift</code> and edit <code>SceneDelegate.swift</code>.<br>Add:</p><pre><code>window?.<span class="property">rootViewController</span> = <span class="type">UITabBarController</span>()
</code></pre><p>Make two new Cocoa Touch Class files subclassing UIViewController and call them SearchVC and FavoritesListVC, for example. As a functionality test, we add a background color to their views. Feel free to choose a color:</p><pre><code> <span class="comment">// incidentally, this supports dark mode automatically!</span>
view.<span class="property">backgroundColor</span> = .<span class="dotAccess">systemPink</span>
</code></pre><p>We now create two navigation controllers for our tabbar, which will hold an array of NavigationControllers (in our case two) and each of them will have an array of ViewControllers (in this case just one).</p><p>Edit the <code>SceneDelegate.swift</code> as follows:</p><pre><code><span class="comment">// create the nav controllers</span>
<span class="keyword">let</span> searchNC = <span class="type">UINavigationController</span>(rootViewController: <span class="type">SearchVC</span>())
<span class="keyword">let</span> favoritesNC = <span class="type">UINavigationController</span>(rootViewController: <span class="type">FavoriteListyVC</span>())
<span class="comment">// create tabbar with array of nav controllers</span>
<span class="keyword">let</span> tabbar = <span class="type">UITabBarController</span>()
tabbar.<span class="property">viewControllers</span> = [searchNC, favoritesNC]
</code></pre><p>Run the project: you should see two screens whose tab bar items have no icons or titles yet, but you will still be able to switch between the two.</p><h4>Customize and refactor</h4><p>Now we will customize the two navigation controllers and the tabbar.</p><pre><code><span class="keyword">func</span> createSearchNavigationController() -> <span class="type">UINavigationController</span> {
<span class="comment">// we create the view controller and insert into the nav controller and return</span>
<span class="keyword">let</span> searchVC = <span class="type">SearchVC</span>()
searchVC.<span class="property">title</span> = <span class="string">"Search"</span>
<span class="comment">// it is a system tabbaritem , tag zero because it is the first one</span>
searchVC.<span class="property">tabBarItem</span> = <span class="type">UITabBarItem</span>(tabBarSystemItem: .<span class="dotAccess">search</span>, tag: <span class="number">0</span>)
<span class="keyword">return</span> <span class="type">UINavigationController</span>(rootViewController: searchVC)
}
<span class="keyword">func</span> createFavoritesNavigationController() -> <span class="type">UINavigationController</span> {
<span class="keyword">let</span> favoritesListVC = <span class="type">FavoritesListVC</span>()
favoritesListVC.<span class="property">title</span> = <span class="string">"Favorites"</span>
<span class="comment">// it is a system tabbaritem , tag zero because it is the first one</span>
favoritesListVC.<span class="property">tabBarItem</span> = <span class="type">UITabBarItem</span>(tabBarSystemItem: .<span class="dotAccess">favorites</span>, tag: <span class="number">1</span>)
<span class="keyword">return</span> <span class="type">UINavigationController</span>(rootViewController: favoritesListVC)
}
</code></pre><p>Refactor the tabbar setup into its own function:</p><pre><code><span class="keyword">func</span> createtabbar() -> <span class="type">UITabBarController</span> {
<span class="keyword">let</span> tabbar = <span class="type">UITabBarController</span>()
<span class="comment">// here we assign a tint to all our tabbars, this will be visible on the items (icons)</span>
<span class="type">UITabBar</span>.<span class="call">appearance</span>().<span class="property">tintColor</span> = .<span class="dotAccess">systemPink</span>
<span class="comment">// replace our array variables with the functions we created</span>
tabbar.<span class="property">viewControllers</span> = [<span class="call">createSearchNavigationController</span>(), <span class="call">createFavoritesNavigationController</span>()]
<span class="keyword">return</span> tabbar
}
</code></pre><p>Our <code>scene</code> function will be much cleaner and now look like:</p><pre><code><span class="keyword">func</span> scene(<span class="keyword">_</span> scene: <span class="type">UIScene</span>, willConnectTo session: <span class="type">UISceneSession</span>, options connectionOptions: <span class="type">UIScene</span>.<span class="type">ConnectionOptions</span>) {
<span class="keyword">guard let</span> windowScene = (scene <span class="keyword">as</span>? <span class="type">UIWindowScene</span>) <span class="keyword">else</span> { <span class="keyword">return</span> }
window = <span class="type">UIWindow</span>(frame: windowScene.<span class="property">coordinateSpace</span>.<span class="property">bounds</span>)
window?.<span class="property">windowScene</span> = windowScene
window?.<span class="property">rootViewController</span> = <span class="call">createtabbar</span>()
window?.<span class="call">makeKeyAndVisible</span>()
}
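
<span class="comment">// Alternatively (see the next paragraph): a rough sketch of moving this setup into a</span>
<span class="comment">// UITabBarController subclass. The class name is hypothetical, and it assumes that</span>
<span class="comment">// createSearchNavigationController() and createFavoritesNavigationController() are moved into it:</span>
<span class="keyword">class</span> MainTabBarController: <span class="type">UITabBarController</span> {
    <span class="keyword">override func</span> viewDidLoad() {
        <span class="keyword">super</span>.<span class="call">viewDidLoad</span>()
        <span class="type">UITabBar</span>.<span class="call">appearance</span>().<span class="property">tintColor</span> = .<span class="dotAccess">systemPink</span>
        viewControllers = [<span class="call">createSearchNavigationController</span>(), <span class="call">createFavoritesNavigationController</span>()]
    }
}
<span class="comment">// the SceneDelegate would then simply do: window?.rootViewController = MainTabBarController()</span>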
</code></pre><p>Alternatively, you can create a separate file with a tabbar class inheriting from UITabBarController, containing the functions <code>createSearchNavigationController()</code> and <code>createFavoritesNavigationController()</code> and initialising the tab bar (a rough sketch of this is appended to the code block above).</p><p>Now, running in the simulator, you should be able to switch between these two tabs:</p><p align="center">
<img src="https://multitudes.github.io/images/tabbar/11.png" class="pure-img-responsive" width="230" title="tabbar image"></img>
<img src="https://multitudes.github.io/images/tabbar/12.png" class="pure-img-responsive" width="230" title="tabbar image"></img>
</p><h2>Custom Tabbar Items</h2><p>As of Swift 5.1 and the current Xcode, the title and image of the system tab bar items cannot be changed. You cannot take an Apple-provided icon and give it a different title. They are defined in Objective-C as the enum <code>UITabBarSystemItem</code> (imported into Swift as <code>UITabBarItem.SystemItem</code>). This is the list of the system options:</p><pre><code><span class="keyword">enum</span> SystemItem: <span class="type">Int</span> {
    <span class="keyword">case</span> more, favorites, featured, topRated, recents, contacts,
         history, bookmarks, search, downloads, mostRecent, mostViewed
}
</code></pre><p>This has a reason: some icons are immediately recognisable across the whole iOS ecosystem and have the same meaning for everyone, so it is not possible to use them for anything other than what they were intended for. In this way Apple also creates a consistent user experience across all its apps.</p><p>To use a custom icon and title, select an item in the storyboards like this:</p><p><a href="https://i.stack.imgur.com/PG4aG.png"><img src="https://i.stack.imgur.com/PG4aG.png" alt="itemWithStoryboards"/></a></p><p>The SF Symbols introduced at WWDC 2019 are a great option. Download the SF Symbols app from the <a href="https://developer.apple.com/design/human-interface-guidelines/sf-symbols/overview/">Apple website</a>, open it and select a symbol you like. Copy the name with CMD-Shift-C. Assign an icon and text to a tabbar item using SF Symbols, with <code>"person.fill"</code> for the icon and "Profile" for the title, as below:</p><pre><code><span class="keyword">func</span> createProfileNC() -> <span class="type">UINavigationController</span> {
<span class="keyword">let</span> profileVC = <span class="type">ProfileVC</span>()
profileVC.<span class="property">title</span> = <span class="string">"My Profile"</span>
profileVC.<span class="property">tabBarItem</span> = <span class="type">UITabBarItem</span>(title: <span class="string">"Profile"</span>, image: <span class="type">UIImage</span>(systemName: <span class="string">"person.fill"</span>), tag: <span class="number">2</span>)
<span class="keyword">return</span> <span class="type">UINavigationController</span>(rootViewController: profileVC)
}
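
<span class="comment">// usage: include it in the tabbar's array alongside the other two, e.g.</span>
<span class="comment">// tabbar.viewControllers = [createSearchNavigationController(),</span>
<span class="comment">//                           createFavoritesNavigationController(),</span>
<span class="comment">//                           createProfileNC()]</span>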
</code></pre>]]></content:encoded></item><item><guid isPermaLink="true">https://multitudes.github.io/posts/Delegates-Protocols</guid><title>Delegate Pattern in Swift easy explained</title><description></description><link>https://multitudes.github.io/posts/Delegates-Protocols</link><pubDate>Sat, 25 Jan 2020 09:38:00 +0100</pubDate><content:encoded><![CDATA[<p>Delegation is a very common design pattern in iOS. For beginners it can be a bit difficult to understand at first. Here is a quick rundown to get it working quickly.</p><p>From the <a href="https://docs.swift.org/swift-book/LanguageGuide/Protocols.html">Swift documentation</a>:</p><blockquote><p>Delegation is a design pattern that enables a class or structure to hand off (or delegate) some of its responsibilities to an instance of another type. This design pattern is implemented by defining a protocol that encapsulates the delegated responsibilities, such that a conforming type (known as a delegate) is guaranteed to provide the functionality that has been delegated. Delegation can be used to respond to a particular action or to retrieve data from an external source without needing to know the underlying type of that source.</p></blockquote><p>There is often an analogy with the boss and the intern.</p><p>I made a small example below. Our app has two screens. The first screen changes its colour based on the choice taken by the user on the second screen. How do these two views communicate?</p><p>In this case, we can think of the first screen (the BaseScreen) as our intern waiting to be told what to do. The second screen (the SelectionScreen) is the boss telling the first screen to change its colour.</p><p>As a starting point, the two view controllers look like this:</p><h4>The BaseScreen</h4><pre><code><span class="keyword">class</span> BaseScreen: <span class="type">UIViewController</span> {
<span class="comment">//this button will bring me to the SelectionScreen</span>
<span class="keyword">@IBOutlet weak var</span> chooseButton: <span class="type">UIButton</span>!
<span class="keyword">override func</span> viewDidLoad() {
<span class="keyword">super</span>.<span class="call">viewDidLoad</span>()
}
<span class="keyword">@IBAction func</span> chooseButtonTapped(<span class="keyword">_</span> sender: <span class="type">UIButton</span>) {
<span class="keyword">let</span> selectionVC = storyboard?.<span class="call">instantiateViewController</span>(withIdentifier: <span class="string">"SelectionScreen"</span>) <span class="keyword">as</span>! <span class="type">SelectionScreen</span>
<span class="call">present</span>(selectionVC, animated: <span class="keyword">true</span>, completion: <span class="keyword">nil</span>)
}
}
</code></pre><h4>The SelectionScreen</h4><p>This view shows two buttons; as a starting point, pressing either one just dismisses the view controller and returns to the BaseScreen.</p><pre><code><span class="keyword">class</span> SelectionScreen: <span class="type">UIViewController</span> {
<span class="keyword">override func</span> viewDidLoad() {
<span class="keyword">super</span>.<span class="call">viewDidLoad</span>()
}
<span class="keyword">@IBAction func</span> redButtonTapped(<span class="keyword">_</span> sender: <span class="type">UIButton</span>) {
<span class="call">dismiss</span>(animated: <span class="keyword">true</span>, completion: <span class="keyword">nil</span>)
}
<span class="keyword">@IBAction func</span> blueButtonTapped(<span class="keyword">_</span> sender: <span class="type">UIButton</span>) {
<span class="call">dismiss</span>(animated: <span class="keyword">true</span>, completion: <span class="keyword">nil</span>)
}
}
</code></pre><h4>The steps to implement the delegation pattern are:</h4><ul><li>Create a protocol with a function declaration. The function is only declared in the protocol; it will be called from the selection screen and implemented in the base screen.</li></ul><pre><code><span class="keyword">protocol</span> ColorChangeDelegate {
<span class="keyword">func</span> didChooseColor(color: <span class="type">UIColor</span>)
}
</code></pre><ul><li>Our main screen, the base screen, will conform to that delegate protocol and implement the function declared in the protocol.</li></ul><pre><code><span class="keyword">extension</span> <span class="type">BaseScreen</span>: <span class="type">ColorChangeDelegate</span> {
<span class="keyword">func</span> didChooseColor(color: <span class="type">UIColor</span>) {
view.<span class="property">backgroundColor</span> = color
}
}
</code></pre><p>So when the base screen presents the SelectionScreen, it sets itself as the delegate.</p><pre><code><span class="keyword">class</span> BaseScreen: <span class="type">UIViewController</span> {
<span class="keyword">@IBOutlet weak var</span> chooseButton: <span class="type">UIButton</span>!
<span class="keyword">override func</span> viewDidLoad() {
<span class="keyword">super</span>.<span class="call">viewDidLoad</span>()
}
<span class="keyword">@IBAction func</span> chooseButtonTapped(<span class="keyword">_</span> sender: <span class="type">UIButton</span>) {
<span class="keyword">let</span> selectionVC = storyboard?.<span class="call">instantiateViewController</span>(withIdentifier: <span class="string">"SelectionScreen"</span>) <span class="keyword">as</span>! <span class="type">SelectionScreen</span>
<span class="comment">// this is where we assign the base controller as the delegate of the next screen</span>
selectionVC.<span class="property">colorDelegate</span> = <span class="keyword">self</span>
present(selectionVC, animated: <span class="keyword">true</span>, completion: <span class="keyword">nil</span>)
}
}
</code></pre><ul><li>The selection screen calls the delegate function when a button is tapped, but leaves it to the delegate to decide what actually happens:</li></ul><pre><code><span class="keyword">class</span> SelectionScreen: <span class="type">UIViewController</span> {
<span class="comment">// this is the declaration of my delegate. It is unwrapped because it is initialised in the previous screen</span>
<span class="keyword">var</span> colorDelegate: <span class="type">ColorChangeDelegate</span>!
<span class="keyword">override func</span> viewDidLoad() {
<span class="keyword">super</span>.<span class="call">viewDidLoad</span>()
}
<span class="keyword">@IBAction func</span> redButtonTapped(<span class="keyword">_</span> sender: <span class="type">UIButton</span>) {
<span class="comment">// if I tap this button I call the method didChooseColor(color:) on my delegate
// Guess who is the delegate? That's right, BaseScreen!</span>
colorDelegate.<span class="call">didChooseColor</span>(color: .<span class="dotAccess">red</span>)
<span class="call">dismiss</span>(animated: <span class="keyword">true</span>, completion: <span class="keyword">nil</span>)
}
<span class="keyword">@IBAction func</span> blueButtonTapped(<span class="keyword">_</span> sender: <span class="type">UIButton</span>) {
<span class="comment">// same here as above</span>
colorDelegate.<span class="call">didChooseColor</span>(color: .<span class="dotAccess">blue</span>)
<span class="call">dismiss</span>(animated: <span class="keyword">true</span>, completion: <span class="keyword">nil</span>)
}
}
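// Side note (not part of the original example): in production code the delegate
// is usually held as a weak optional to avoid a retain cycle, which requires a
// class-only protocol, e.g.
//
//   protocol ColorChangeDelegate: AnyObject { func didChooseColor(color: UIColor) }
//   weak var colorDelegate: ColorChangeDelegate?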
</code></pre><p>Now tapping one of the two buttons dismisses the SelectionScreen and changes the colour of the delegate view, which is our BaseScreen.</p><p>This was a very basic explanation. The code is on <a href="https://github.com/multitudes/Delegates-Protocols/tree/master">GitHub</a>. I hope this simple example made things a bit clearer about delegation in Swift.</p>]]></content:encoded></item><item><guid isPermaLink="true">https://multitudes.github.io/posts/What%20I%20learned%20from%20the%20AoC%20for%20Swift%20Playgrounds</guid><title>What I learned from doing the Advent of Code 2019</title><description></description><link>https://multitudes.github.io/posts/What%20I%20learned%20from%20the%20AoC%20for%20Swift%20Playgrounds</link><pubDate>Fri, 27 Dec 2019 09:38:00 +0100</pubDate><content:encoded><![CDATA[<h1>What I learned from doing the Advent of Code 2019</h1><p align="center">
<img src="https://multitudes.github.io/images/aoc/aoc0-sm.png" class="pure-img-responsive" title="aoc"></img>
</p><p>At the beginning of the month, I had to study hard for the IHK (German Chamber of Commerce) software developer exams. After passing those I was able to focus on different things again. This is when I found out about the Advent of Code. Spending more time just writing code and algorithms in the language of my choice has been a great way to relax and have some fun!<br><br>I did many tutorials with Swift before and the experience has always been guided. Of course, it has to be like this, nothing wrong with the tutorials. It is just that I never really put into practice what I was learning. The Advent of Code has allowed me to approach and deal with problems. I was about to say real-world problems, but rescuing Santa cannot really be considered a real-world problem, can it?</p><p>Let me explain. I love to take a book and practice algorithms. This is a different experience than, say: "I need to calculate this, how can I do it?", and then spending a few hours looking for a solution.</p><p>The <a href="https://adventofcode.com">Advent of Code 2019</a> in December 2019 has been a great opportunity to practice all that I learned with Swift in 2019.</p><p>I enjoyed the challenges and especially the realisations that came with them. The puzzles were not easy for me. I cannot believe 100 people a day managed to solve them in less than one hour.<br>I loved the Reddit with the discussions and the poems!<br>Not only do people solve these much quicker than me, they manage to write a poem too. I thought this was wonderful, and I enjoyed seeing that people used so many different programming languages to solve the challenges, from Bash, to Go, to Excel, even ARMv7-M Assembly!<br><br>It is incredibly inspiring to see so many skills in action and to look at each other's code, even if I do not understand most of it.<br>With time I found two other developers using Swift whose solutions were implemented using packages and templates, allowing the code to be run in Terminal on Mac or Linux. I thought this was interesting because Swift is open source and I still rely too much on the Apple platform.<br><br>This article is still in progress and will be for a while, because at the time of writing I did not finish the <a href="https://adventofcode.com">Advent of Code 2019</a>. I managed to get to day 15 and then life got in the way.<br><br>I am just not fast enough, I guess. In 2020 I will put all my energy into preparing myself for, hopefully, a new job as a software developer. The priorities are editing my website, writing my CV and applying for jobs. I will put the challenges aside for the time being!</p><h4>What can go wrong using the Swift Playgrounds on Xcode?</h4><p>My programming language of choice is Swift and I thought the Xcode Playgrounds would be ideal for this. I did not know what was waiting for me, haha! At the end of the article below, I will explain in more detail the pros and cons of using the playgrounds for the challenges.</p><p>I collected all the challenges in one big playground with multiple pages and published it on <a href="https://github.com/multitudes/Advent-of-Code-2019">Github</a> to document my journey.</p><p>Xcode playgrounds can render markdown and create a nice layout to introduce and describe the challenges and explain the code. They are similar to the Notebooks in Python, and this seemed to be a great feature and a bonus. As seen in the images, this is an example of markdown rendering and navigation in the playgrounds. 
I assigned a shortcut to go from the editing mode to the rendered mode.<br><br>From:</p><p align="center">
<img src="https://multitudes.github.io/images/aoc/markdown0.png" class="pure-img-responsive" title="aoc"></img>
</p><p>To:</p><p align="center">
<img src="https://multitudes.github.io/images/aoc/markdown1.png" class="pure-img-responsive" title="aoc"></img>
</p><p>Many WWDC scholarship submissions are done with the playgrounds, and their use is actively encouraged by Apple. Swift is a modern language said to be as fast as C and to offer many nice features like functional programming. Sounds too good to be true! Let's get started!</p><h4>Day 1</h4><p>This was just to get the party going.<br><br>The leaderboards were unlocked in 4 minutes 12 seconds, which means that is how long it took the first person to get to the solution.<br>Especially for people doing the challenges to learn a new language, the first day helps to get the system ready and to get comfortable doing all the basic operations required by the later challenges, like reading and saving files from disk, displaying the solution, printing to screen, etc.</p><p>Not difficult at all, but it took a long time for me to understand how certain things in the playgrounds work.<br><br>For instance, where do I put the input file? In the Sources or Resources folder? Not a trivial question. The Sources folder is for the source code, but it would work for any other resources too. However, the correct way is to put the input.txt file for the puzzle in Resources! The Sources folder is for the code. Swift files will not be recognized if put in the wrong folder. This I learned!</p><p align="center">
<img src="https://multitudes.github.io/images/aoc/sourceFolder.png" class="pure-img-responsive" title="aoc"></img>
</p><h4>Day 2: Meet the IntCode Computer</h4><p>The IntCode Computer is at the heart of many AoC 2019 challenges.<br><br>It starts on day 2:</p><blockquote><p>An Intcode program is a list of integers separated by commas (like 1,0,0,3,99). To run one, start by looking at the first integer (called position 0). Here, you will find an opcode - either 1, 2, or 99. The opcode indicates what to do; for example, 99 means that the program is finished and should immediately halt. Encountering an unknown opcode means something went wrong. For example, suppose you have the following program: 1,9,10,3,2,3,11,0,99,30,40,50 The first four integers, 1,9,10,3, are at positions 0, 1, 2, and 3. Together, they represent the first opcode (1, addition), the positions of the two inputs (9 and 10), and the position of the output (3). To handle this opcode, you first need to get the values at the input positions: position 9 contains 30, and position 10 contains 40. Add these numbers together to get 70. Then, store this value at the output position; here, the output position (3) is at position 3, so it overwrites itself.</p></blockquote><p>This challenge was continued on Day 5, where we got more opcodes, and on day 7, where we had to use the IntCode Computer to activate our thrusters to reach Santa on time:</p><blockquote><p>The Elves have sent you some Amplifier Controller Software (your puzzle input), a program that should run on your existing Intcode computer. Each amplifier will need to run a copy of the program.</p></blockquote><p>And finally on day 9 came what turned out for many to be a watershed moment in the Advent of Code:</p><blockquote><p>Your existing Intcode computer is missing one key feature: it needs support for parameters in relative mode. Parameters in mode 2, relative mode, behave very similarly to parameters in position mode: the parameter is interpreted as a position. Like position mode, parameters in relative mode can be read from or written to. The important difference is that relative mode parameters don't count from address 0. Instead, they count from a value called the relative base. The relative base starts at 0.</p></blockquote><p>This challenge somehow came as a shock and became a roadblock for me and many others, as can be seen on the <a href="https://www.reddit.com/r/adventofcode/comments/e85b6d/2019_day_9_solutions/fa9e8av?utm_source=share&utm_medium=web2x">reddit forum</a></p><p align="center">
<img src="https://multitudes.github.io/images/aoc/aoc1.png" class="pure-img-responsive" title="aoc"></img>
</p><p>The first big problem came with the day 9 part 1 challenge.<br>Many forgot to implement the relative mode on the third parameter, which had been mostly implicitly left out till now.<br>This would give ‘203’ as output. I thought there was an element of genius in how these challenges are made. :)</p><p>I found out and corrected the code. Everything was working correctly now.<br>Perhaps not as elegantly as I wanted, but still, this is my first AoC and I am no professional developer after all. Every smaller test file with IntCode was working... but part two got caught in an infinite loop, or so I thought. Nobody else had this problem. I read and reread the challenge description, including all possible hints on the AoC website and the Reddit.</p><p>Especially this sentence below had me thinking for a long time.</p><blockquote><p>Part two does have a purpose. I'm not telling exactly what it is yet, but it's an entirely different kind of test than part 1 did. If you got the right answer, your Intcode implementation passed that test. If you got part 1, you should have gotten part 2 for free, but virtual machines are persnickety critters.</p></blockquote><p>and</p><blockquote><p>Once run, it will boost the sensors automatically, but it might take a few seconds to complete the operation on slower hardware.</p></blockquote><p>A few seconds...? I have a Mac mini 2018 with an Intel i7 with 6 cores... Surely it should not have to run for 10 minutes?<br><br>I spent hours on this, inserting more and more print statements (which ironically would just make my playground slower). I had a bad feeling. Part two would not give me any output and I could not find the bug. I had a break from the IntCode!<br><br>Desperation is a part of doing the challenges. In my case it turned out that the playgrounds, as I was using them, were too slow!</p><h4>Day 3: Crossed Wires</h4><p>The nice thing about this challenge is the understanding that came with using the right collection type in Swift.<br>For this, Sets were the best choice for a quick and efficient solution.<br>On day 3 I started to refactor code and use a separate Swift file for functions and structs in the Sources folder.</p><h4>Day 4: Secure Container</h4><p>Cryptography! Create a sequence of numbers and check how many meet the selected criteria. Good fun. I thought it would be easier, then I had an "aha" moment. My solution was wrong. I had to think harder, and this is part of the fun :) I printed the sequences to debug and could see that this slowed down the code quite a bit. I took out the print statements to see if the speed would improve.<br>When writing this article I actually went back and recorded this gif:</p><p align="center">
<img src="https://multitudes.github.io/images/aoc/aocday4-slow.gif" class="pure-img-responsive" title="aoc"></img>
</p><p>This is slow!</p><p>I did not realize at the time how bad this was. I later understood this code would run instantaneously in Swift, but not on the playground page... But the worst was yet to come!</p><h4>Day 5 - Sunny with a Chance of Asteroids</h4><p>Again fun times with the IntCode computer!</p><h4>Day 6 - Universal Orbit Map</h4><p>A classic linked lists problem.</p><h4>Day 7 - Amplification Circuit</h4><p>Using the IntCode computer! It gets more and more obvious that whoever already had a modular implementation of the computer at the beginning will be at an advantage!</p><p>I missed this and did not spend more time at the beginning refactoring this code; if I had done so, the later challenges would have been easier!</p><h4>Day 8: Space Image Format</h4><p>This was neat and not too difficult.</p><h4>Day 9: Sensor Boost</h4><p>Getting dirty!<br>The new feature of the IntCode Computer! > Your existing Intcode computer is missing one key feature: it needs support for parameters in relative mode.</p><p>This will bring some havoc 😄</p><h4>Day 10</h4><p>This challenge incidentally had a great learning effect on me. I had to stop and learn more about angles and radians, including the atan2() function! Great stuff and honestly not trivial, but I am happy that I now understand how to reverse the direction and calculate the angles of my laser beam to destroy those asteroids.</p><h4>Day 11</h4><p>Had me using the IntCode computer to manoeuvre a hull painting robot. It worked! And that made me happy for the day!</p><h4>Day 12</h4><p>A challenge to determine when four Jupiter moons would get into their initial state again. Of course, that had to be done more smartly, and I came across a modified version of the greatest common divisor algorithm, also called the <a href="https://en.wikipedia.org/wiki/Euclidean_algorithm">Euclidean algorithm</a>, to calculate the <a href="https://en.wikipedia.org/wiki/Least_common_multiple">Least Common Multiple</a>. This has been a bonus learning experience.</p><blockquote><p>The space near Jupiter is not a very safe place; you need to be careful of a big distracting red spot, extreme radiation, and a whole lot of moons swirling around. You decide to start by tracking the four largest moons: Io, Europa, Ganymede, and Callisto. After a brief scan, you calculate the position of each moon (your puzzle input). You just need to simulate their motion so you can avoid them. Each moon has a 3-dimensional position (x, y, and z) and a 3-dimensional velocity. The position of each moon is given in your scan; the x, y, and z velocity of each moon starts at 0. Simulate the motion of the moons in time steps. Within each time step, first update the velocity of every moon by applying gravity. Then, once all moons' velocities have been updated, update the position of every moon by applying velocity. Time progresses by one step once all of the positions are updated.</p></blockquote><p>However, I had to cross a barrier with my Xcode Playgrounds that had been invisible and unforeseen until now. The challenge is about performant code!</p><blockquote><p>Determine the number of steps that must occur before all of the moons' positions and velocities exactly match a previous point in time. For example, the first example above takes 2772 steps before they exactly match a previous point in time; it eventually returns to the initial state. Of course, the universe might last for a very long time before repeating. This set of initial positions takes 4686774924 steps before it repeats a previous state! 
You might need to find a more efficient way to simulate the universe.</p></blockquote><p>Part 2 of the challenge went into a loop again. All the smaller test positions had been debugged and the code seemed correct. I spent days on it. Xmas eve was in between, so I stopped worrying and had a break. Eventually, after Xmas, I had a go at it again. Again I looked in Reddit for hints. The solution, which again was pure genius and I would probably have missed it, consists of treating every axis independently, calculating the number of steps for it to return to its original position and velocity, then checking the next axis and the next. The result is then the lowest common multiple, or LCM, of the three orbits! I did this but the third orbit would just go on forever. I thought this was wrong. I was growing increasingly frustrated. I thought this was the end of my challenges. I spent hours on it. Until I left it running and made a coffee, then had a phone call. And when I came back the answer was staring at me:</p><blockquote><p>376243355967784</p></blockquote><p>Beautiful, but how is that possible? I finally googled specifically for "How slow are playgrounds compared to Swift?" I found this enlightening answer on SO: <a href="https://stackoverflow.com/questions/55303853/recursive-algorithm-is-much-slower-in-playgrounds-1-minute-than-xcode-0-1-sec">recursive-algorithm-is-much-slower-in-playgrounds</a> and <a href="https://stackoverflow.com/a/47542545/9497800">swift-playground-execution-speed</a></p><p>Of course, the most performant way is to put all the performance-critical code in a Swift file inside the Sources folder of the playground, but I never imagined this would make such a big difference!</p><p>This below is another example of how the Playgrounds can be unexpectedly quirky!</p><pre><code><span class="keyword">var</span> count = <span class="number">0</span>
<span class="keyword">for</span> i <span class="keyword">in</span> <span class="number">0</span>..<<span class="number">1_000_000_000</span> {
(count += <span class="number">1</span>) <span class="comment">// try to execute the code with or without the parenthesis! 🤯</span>
<span class="keyword">if</span> count % <span class="number">100_000</span> == <span class="number">0</span> {
<span class="comment">// print only every 100_000th loop iteration</span>
<span class="call">print</span>(count)
}
}
</code></pre><p>Without the parentheses: about 10.000 loop iterations per second. With the parentheses: about 10.000.000 loop iterations per second!!!</p><h4>Day 13-15</h4><p>After Xmas, I managed to do three more days but the momentum had somewhat gone. I know it is important to keep going, but I just feel guilty spending so much time on challenges when I should put my energy into finding a job. This means preparing my CV, cleaning up my portfolio and doing a few more iOS projects to show. Right now that is all that I can concentrate on. So I stopped on Day 15.</p><h1>Get more Performance!</h1><ul><li>Put all your classes and structs in one or a few .swift files and add them to the Sources folder. All the classes and structs need to be marked public or the main code will not see them. This will speed up the performance considerably.</li><li>Use fewer print statements.</li><li>Use macOS playgrounds (there is an option for that in the inspector)</li></ul><h2>The disadvantages to using Xcode Playgrounds for Swift</h2><ul><li>Once you put your classes and structs in Sources there is a message that you will get all the time, like: "... is inaccessible due to internal protection level". This informs you that you need to make every single property and method public. That's a lot of extra typing, because the structs will need a public initializer. Usually you would get one for free; well, they do need one now.</li><li>Putting the code in Sources makes it very slow to debug; the playground needs minutes(!) to understand that you made a change in a Swift file in Sources, or that a struct that was previously in the main file is now in the Sources folder.</li><li>I had to restart Xcode a lot doing the above. Xcode again needs quite a long time to understand that the permission has been changed from internal to public.</li><li>Playgrounds do not have debuggers. An easy option is to use print statements, and these make your code slow.</li><li>There is no readLine function in the Playgrounds, so strangely it is not possible to input anything in your running program. All languages since the '70s and before have had this possibility, obviously. I believe the Playgrounds have always been a way to quickly test some software, so the inputs are hardcoded in the software you write; still, it is puzzling!</li><li>The graphical possibilities using the console are limited. There are other ways to have a graphical interface, but this requires some more knowledge of the macOS or iOS UIKit and it is not trivial. For instance, I can draw a maze in the console using the print functions with a carriage return and ASCII characters like "-" and "#", but what if I wanted to clear the console? There is no such thing as a clear command, and I did look and try! I do miss things which have long been available in Bash and Terminal shells, and for a reason! Because they are useful!</li></ul>]]></content:encoded></item><item><guid isPermaLink="true">https://multitudes.github.io/posts/Introducing%20ARKit3</guid><title>Introducing ARKit3</title><description></description><link>https://multitudes.github.io/posts/Introducing%20ARKit3</link><pubDate>Sat, 6 Jul 2019 09:38:00 +0200</pubDate><content:encoded><![CDATA[<p>This is an extract of the Apple Developer Keynote's talk <a href="https://developer.apple.com/videos/play/wwdc2019/604/">Introducing ARKit3</a> for my own enjoyment and learning</p><p align="center">
<img src="https://multitudes.github.io/images/arkit3/1.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><p>ARKit provides three pillars of functionality.</p><h3>Tracking</h3><p>It determines where your device is with respect to the environment, so that virtual content can be positioned and updated correctly in real time.<br>This creates the illusion that the virtual content is placed in the real world.<br>ARKit provides different tracking technologies such as world tracking, face tracking and image tracking.</p><h3>Scene understanding</h3><p>This sits on top of tracking; with it you can identify surfaces, images and 3D objects in the scene and attach virtual content right on them.<br>It also learns about the lighting and texture to help make the content more realistic.</p><h3>Rendering</h3><p>Brings the 3D content to life.<br>There are three main renderers: SceneKit, SpriteKit, Metal ...<br><br></p><p align="center">
<img src="https://multitudes.github.io/images/arkit3/2.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><p>and from this year Reality Kit, designed for AR.<br><br></p><p align="center">
<img src="https://multitudes.github.io/images/arkit3/3.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><h2>New Features in ARKit3</h2><p>Some of them are Visual Coherence, Positional Tracking, Simultaneous Front and Back Camera, Record and Replay of Sequences, More Robust 3D Object Detection, Multiple-face Tracking, HDR Environment Textures, Faster Reference Image Loading, Motion Capture, Detect up to 100 Images, Face Tracking Enhancements, People Occlusion, Raycasting, Collaborative Session, ML Based Plane Detection, New Plane Classes, RealityKit Integration, AR Coaching UI.</p><h3>People Occlusion</h3><p align="center">
<img src="https://multitudes.github.io/images/arkit3/4.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><p>(Available on A12 and later)<br><br>Enables virtual content to be rendered behind people. It works for multiple people in the scene, for fully and partially visible people, and integrates with ARView and ARSCNView.<br><br>To produce a convincing AR experience it is important to position the content accurately and also to match the world lighting.<br>When people are in the frame it can quickly break the illusion, because when they are in front they are expected to cover the model. With people occlusion this problem is solved.<br>Virtual content by default is rendered on top of the camera image. Thanks to machine learning, ARKit recognizes people in the frame and then creates a separate layer including only the pixels representing the people. We call this segmentation. Then we can render that layer on top of everything else. But this would not be enough. ARKit uses machine learning to make an additional depth estimation to understand how far the segmented people are from the camera, and to make sure to render people in front only if they are actually closer to the camera. Thanks to the power of the Neural Engine in the A12 chip we are able to do this for every frame in real time.<br><br></p><p align="center">
<img src="https://multitudes.github.io/images/arkit3/5.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><p>Let's see it in code:<br>We have a new property on ARConfiguration called FrameSemantics.<br><br>This will give you different semantic information of what is in the current frame.</p><pre><code>
<span class="keyword">class</span> ARConfiguration : <span class="type">NSObject</span> {
<span class="keyword">var</span> frameSemantics: <span class="type">ARConfiguration</span>.<span class="type">FrameSemantics</span> { <span class="keyword">get set</span> }
<span class="keyword">class func</span> supportsFrameSemantics(<span class="type">ARConfiguration</span>.<span class="type">FrameSemantics</span>) -> <span class="type">Bool</span>
}
</code></pre><p>Specifically for People Occlusion there are two options available:<br>One option is personSegmentation.<br>This is best when you know people will always be in the front.</p><pre><code><span class="keyword">let</span> configuration = <span class="type">ARWorldTrackingConfiguration</span>()
configuration.<span class="property">frameSemantics</span> = .<span class="dotAccess">personSegmentation</span>
session.<span class="call">run</span>(configuration)
</code></pre><p>The other option is person segmentation with depth.<br>This is best if people will be either behind or in the front</p><pre><code><span class="keyword">let</span> configuration = <span class="type">ARWorldTrackingConfiguration</span>()
configuration.<span class="property">frameSemantics</span> = .<span class="dotAccess">personSegmentationWithDepth</span>
session.<span class="call">run</span>(configuration)
</code></pre><p>For advanced users using Metal, you can access the segmentation data on every frame like this:</p><pre><code><span class="keyword">open class</span> ARFrame : <span class="type">NSObject</span>, <span class="type">NSCopying</span> {
<span class="keyword">open var</span> segmentationBuffer: <span class="type">CVPixelBuffer</span>? { <span class="keyword">get</span> }
<span class="keyword">open var</span> estimatedDepthData: <span class="type">CVPixelBuffer</span>? { <span class="keyword">get</span> }
}
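
// A usage sketch (assuming this sits in an ARSessionDelegate): both buffers
// arrive with every frame and can be fed into a custom Metal compositing pass.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if let segmentation = frame.segmentationBuffer,
       let estimatedDepth = frame.estimatedDepthData {
        // composite camera image, virtual content, segmentation and depth here
    }
}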
</code></pre><p>Let's see an example of using the API.<br>In <code>viewDidLoad</code> you create an anchor entity looking for horizontal planes and add it to the scene.<br>Then you retrieve the <code>url</code> of the model to load and load it using loadModelAsync in asynchronous mode.<br>We add the entity as a child of our anchor and also add support for gestures.<br>This will automatically set up a world tracking configuration thanks to RealityKit.</p><p align="center">
<img src="https://multitudes.github.io/images/arkit3/6.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><p>I want to implement a toggle that switches occlusion on and off with a simple tap.<br>We always need to check whether the device supports the frame semantics and gracefully handle the case where it does not.<br><br></p><p align="center">
<img src="https://multitudes.github.io/images/arkit3/7.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><p>And add the toggle function<br></p><p align="center">
<img src="https://multitudes.github.io/images/arkit3/8.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><h2>Motion Capture</h2><p>(Available on A12 and later) Tracks the human body in 2D and 3D.<br>You can track the body of a person as a skeleton representation which can then be mapped to a virtual character in real time. This is made possible by advanced machine learning algorithms.</p><h3>2D Body Detection</h3><p>We have a new frameSemantics option called .bodyDetection. This is supported on the WorldTrackingConfiguration and on the Image and Orientation tracking configurations.</p><pre><code><span class="keyword">let</span> configuration = <span class="type">ARWorldTrackingConfiguration</span>()
configuration.<span class="property">frameSemantics</span> = .<span class="dotAccess">bodyDetection</span>
session.<span class="call">run</span>(configuration)
</code></pre><p>Let's have a look at the data you will be getting back.</p><p align="center">
<img src="https://multitudes.github.io/images/arkit3/9.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><p>Every ARFrame delivers an object of type ARBody2D in the detectedBody property if a person was detected. This object contains a 2D skeleton, ARSkeleton2D, and it will provide you with all the joint landmarks in normalized image space. They are returned in a flat hierarchy in an array because this is more efficient for processing, but you will also be getting a skeleton definition with all the information about how to interpret the skeleton data.<br>In particular it contains information about the hierarchy of joints, like the fact that the hand joint is a child of the elbow joint.</p><h3>3D Motion capture</h3><p align="center">
<img src="https://multitudes.github.io/images/arkit3/10.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><p>Tracks a human body pose in 3D space and provides a 3D skeleton representation, with scale estimation to let you determine the size of the person being tracked, anchored in world coordinates. We are introducing a new configuration called ARBodyTrackingConfiguration. The body detection frame semantics is turned on by default in that configuration. In addition it tracks device position and orientation and selected world tracking features such as plane estimation or image detection. In code:</p><pre><code><span class="keyword">if</span> <span class="type">ARBodyTrackingConfiguration</span>.<span class="property">isSupported</span> {
<span class="keyword">let</span> configuration = <span class="type">ARBodyTrackingConfiguration</span>()
session.<span class="call">run</span>(configuration) }
</code></pre><p>When ARKit is running and detects a person it will deliver a new type of anchor, an ARBodyAnchor. This is provided in the session anchor callback like the other anchor types you know. It has a transform with the position and orientation of the detected person in world coordinates; in addition you get the scale factor and a reference to the 3D skeleton:</p><pre><code><span class="keyword">open class</span> ARBodyAnchor : <span class="type">ARAnchor</span> {
<span class="keyword">open var</span> transform: <span class="call">simd_float4x4</span> { <span class="keyword">get</span> }
<span class="keyword">open var</span> estimatedScaleFactor: <span class="type">Float</span> { <span class="keyword">get</span> }
<span class="keyword">open var</span> skeleton: <span class="type">ARSkeleton3D</span> { <span class="keyword">get</span> }
}
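
// Receiving the body anchor in the session callback mentioned above (a sketch):
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let bodyAnchor as ARBodyAnchor in anchors {
        print(bodyAnchor.transform, bodyAnchor.estimatedScaleFactor)
    }
}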
</code></pre><p>The yellow joints are the ones which will be delivered to the user with motion capture data. The white ones are leaf joints, additionally available in the skeleton but not actively tracked. Labels are available, and in the API you can query the joints by their particular name.</p><p>One particular use case is animating a 3D character.</p><h3>Animating 3D Characters</h3><p>You will need a rigged mesh. To do this in code with the RealityKit API, first you create an anchor entity of type body and add this anchor to the scene (1). Then load the model (2), using the asynchronous loading API, and in the completion handler you will be getting the body-tracked entity that you just need to add as a child to our body anchor. In this way the pose of the skeleton will be applied to the model in real time.</p><pre><code><span class="comment">// Animating a 3D character with RealityKit
// Add body anchor (1)</span>
<span class="keyword">let</span> bodyAnchor = <span class="type">AnchorEntity</span>(.<span class="dotAccess">body</span>)
arView.<span class="property">scene</span>.<span class="call">addAnchor</span>(bodyAnchor)
<span class="comment">// Load rigged mesh (2)</span>
<span class="type">Entity</span>.<span class="call">loadBodyTrackedAsync</span>(named: <span class="string">"robot"</span>).<span class="call">sink</span>(receiveValue: { (character) <span class="keyword">in</span>
<span class="comment">// Assign body anchor</span>
bodyAnchor.<span class="call">addChild</span>(character)
})
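// Note: sink(receiveValue:) returns a Combine cancellable; in a real app you would
// keep a reference to it (e.g. in a property) so the load is not cancelled early.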
</code></pre><h3>Simultaneous Front and Back Camera</h3><p align="center">
<img src="https://multitudes.github.io/images/arkit3/11.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><p>Enables world tracking with face data, and face tracking with device orientation and position. Supported on A12 and later.</p><p>ARKit lets you do world tracking on the back-facing camera and face tracking with the TrueDepth system on the front. You can build AR experiences using the front and back camera at the same time, and face tracking experiences that make use of the device orientation and position. All this is supported on A12 and later. For example, we run world tracking with plane estimation and also place a face mesh on top of the plane, updating it in real time with facial expressions captured through the front camera.</p><p>Let's see the API. First we create a world tracking configuration. This determines which camera stream will be displayed on the screen: the back-facing camera. Then I turn on the new face tracking enabled property and run the session. This will cause us to receive face anchors, and I can use any information from that anchor, like the face mesh, the anchor transform, etc.<br>Since we are working with world coordinates, the user's face transform will be placed behind the camera, so in order to visualize the face you will need to transform it to a location somewhere in front of the camera.</p><pre><code><span class="comment">// Enable face tracking in world tracking configuration</span>
<span class="keyword">let</span> configuration = <span class="type">ARWorldTrackingConfiguration</span>()
<span class="keyword">if</span> configuration.<span class="property">supportsUserFaceTracking</span> {
}
configuration.<span class="property">userFaceTrackingEnabled</span> = <span class="keyword">true</span>
session.<span class="call">run</span>(configuration)
<span class="comment">// Receive face data</span>
<span class="keyword">func</span> session(<span class="keyword">_</span> session: <span class="type">ARSession</span>, didAdd anchors: [<span class="type">ARAnchor</span>]) {
<span class="keyword">for</span> anchor <span class="keyword">in</span> anchors <span class="keyword">where</span> anchor <span class="keyword">is</span> <span class="type">ARFaceAnchor</span> {
}
...
}
</code></pre><p>Let's see the face tracking configuration. You create the configuration as always and set worldTrackingEnabled to true. Then in every frame callback you can access the transform of the current camera position and use it for whatever use case you have in mind.</p><pre><code><span class="comment">// Enable world tracking in face tracking configuration</span>
<span class="keyword">let</span> configuration = <span class="type">ARFaceTrackingConfiguration</span>()
<span class="keyword">if</span> configuration.<span class="property">supportsWorldTracking</span> {
}
configuration.<span class="property">worldTrackingEnabled</span> = <span class="keyword">true</span>
session.<span class="call">run</span>(configuration)
<span class="comment">// Access world position and orientation</span>
<span class="keyword">func</span> session(<span class="keyword">_</span> session: <span class="type">ARSession</span>, didUpdate frame: <span class="type">ARFrame</span>) {
<span class="keyword">let</span> transform = frame.<span class="property">camera</span>.<span class="property">transform</span>
...
}
</code></pre><h2>Collaborative Session</h2><p align="center">
<img src="https://multitudes.github.io/images/arkit3/12.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><p>Before, in ARKit 2, you were able to create multi-user experiences, but you had to save the map on one device and send it to another one in order for your users to join the same experience. Now with collaborative sessions in ARKit 3 you are continuously sharing the mapping information between multiple devices across the network. This allows you to create ad-hoc multi-user experiences and additionally to share ARAnchors on all devices. All those anchors are identifiable with session IDs on all devices. At this point all coordinate systems are independent from each other, even though we share the information under the hood.</p><p>In this example two users gather and share feature points in world space. The two maps merge into each other and form one map only. Additionally the other user will be shown as an ARParticipantAnchor too, which allows you to detect when another user is in your environment. It is not limited to two users; you can have a large number of users in one session.</p><p>To start in code:</p><pre><code><span class="comment">// Enable a collaborative session with RealityKit
// Set up networking</span>
<span class="call">setupMultipeerConnectivity</span>()
<span class="comment">// Initialize synchronization service</span>
arView.<span class="property">scene</span>.<span class="property">synchronizationService</span> =
<span class="keyword">try</span>? <span class="type">MultipeerConnectivityService</span>(session: mcSession)
<span class="comment">// Create configuration and enable the collaboration mode</span>
<span class="keyword">let</span> configuration = <span class="type">ARWorldTrackingConfiguration</span>()
configuration.<span class="property">isCollaborationEnabled</span> = <span class="keyword">true</span>
arView.<span class="property">session</span>.<span class="call">run</span>(configuration)
</code></pre><p>When collaboration is enabled you will have a new method on the delegate where you will be receiving some data. Upon receiving that data you need to broadcast it over the network to the other users. Upon reception of the data on the other devices you need to update the AR session so that it knows about the new data.</p><pre><code><span class="comment">// Session callback when some collaboration data is available</span>
<span class="keyword">override func</span> session(<span class="keyword">_</span> session: <span class="type">ARSession</span>, didOutputCollaborationData data:
<span class="type">ARSession</span>.<span class="type">CollaborationData</span>) {
<span class="comment">// Send collaboration data to other participants</span>
mcSession.<span class="call">send</span>(data, toPeers: participantIds, with: .<span class="dotAccess">reliable</span>)
}
<span class="comment">// Multipeer Connectivity session delegate callback upon receiving data from peers</span>
<span class="keyword">func</span> session(<span class="keyword">_</span> session: <span class="type">MCSession</span>, didReceive data: <span class="type">Data</span>, fromPeer peerID: <span class="type">MCPeerID</span>) {
<span class="comment">// Update the session with collaboration data received from another participant</span>
<span class="keyword">let</span> collaborationData = <span class="type">ARSession</span>.<span class="type">CollaborationData</span>(data)
session.<span class="call">update</span>(from: collaborationData)
}
</code></pre><h2>AR Coaching UI</h2><p align="center">
<img src="https://multitudes.github.io/images/arkit3/13.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><p>When you create an AR experience, coaching is really important. You really want to guide your users, whether they are new or returning users. It is not trivial, and during the process you need to react to some tracking events.<br>This year the guidance is embedded in a UIView, called the AR coaching view.<br>It is a built-in UI overlay you can directly embed in your applications. It guides users to a good tracking experience, provides a consistent design throughout applications, and automatically activates and deactivates.<br><br>The setup is really simple.<br><br>You need to add it as a child of any UI view, ideally of the ARView.<br>Then connect the ARSession to the coaching view, or connect the session provider outlet of the coaching view to the session provider itself in the case of a storyboard. Optionally you can specify coaching goals in source code, set delegates and disable some functionality.</p><pre><code>
coachingOverlay.<span class="property">goal</span> = .<span class="dotAccess">horizontalPlane</span>
<span class="comment">// React to activation and deactivation React to relocalization abort request</span>
<span class="keyword">protocol</span> ARCoachingOverlayViewDelegate {
<span class="keyword">func</span> coachingOverlayViewWillActivate(<span class="type">ARCoachingOverlayView</span>)
<span class="keyword">func</span> coachingOverlayViewDidDeactivate(<span class="type">ARCoachingOverlayView</span>)
<span class="keyword">func</span> coachingOverlayViewDidRequestSessionReset(<span class="type">ARCoachingOverlayView</span>)
}
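
// A minimal setup sketch for the steps described above (coachingOverlay and
// arView are assumed to exist, and self conforms to the delegate protocol):
coachingOverlay.session = arView.session
coachingOverlay.delegate = self
coachingOverlay.frame = arView.bounds
arView.addSubview(coachingOverlay)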
</code></pre><p>This year the coaching overlay is activated and deactivated automatically based on device capabilities, and it can be explicitly disabled in the render options.</p><h2>Face Tracking</h2><p>In ARKit 1 we enabled face tracking with the ability to track one face. In ARKit 3 we enabled the ability to track three faces concurrently. We also added the ability to track a face that goes out of the screen and comes back, giving it the same face ID again. The ID is persistent, but when you start a new session it will be reset.</p><pre><code><span class="keyword">open class</span> ARFaceTrackingConfiguration : <span class="type">ARConfiguration</span> {
<span class="keyword">open class var</span> supportedNumberOfTrackedFaces: <span class="type">Int</span> { <span class="keyword">get</span> }
<span class="keyword">open var</span> maximumNumberOfTrackedFaces: <span class="type">Int</span>
}
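
// Usage sketch: track as many faces as the current device supports
let configuration = ARFaceTrackingConfiguration()
configuration.maximumNumberOfTrackedFaces = ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
session.run(configuration)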
</code></pre><h2>ARPositionalTrackingConfiguration</h2><p>This new tracking configuration is intended for positional tracking use cases, where you may not need the camera backdrop to be rendered, for example.<br>It can achieve low power consumption by lowering the camera resolution and frame rate.</p><h2>Improvements to the scene understanding</h2><p align="center">
<img src="https://multitudes.github.io/images/arkit3/14.png" class="pure-img-responsive" title="GetFollowers"></img>
</p><p>Image detection and tracking has been around for some time now. We can now detect up to 100 images at the same time. We also give you the ability to detect the size of the printed image, for example, and adjust the scale accordingly. At runtime we can assess the quality of an image you are passing to ARKit when you want to create a new reference image. We made improvements to the image detection algorithms with machine learning. Plane estimation with machine learning is more accurate, even when feature points are not yet present! Last year we had five different classifications; this year we added the ability to detect doors and windows. Plane estimation is really important to place content in the world.</p><pre><code><span class="keyword">class</span> ARPlaneAnchor : <span class="type">ARAnchor</span> {
<span class="keyword">var</span> classification: <span class="type">ARPlaneAnchor</span>.<span class="type">Classification</span>
}
<span class="keyword">enum</span> ARPlaneAnchor.<span class="type">Classification</span> {
<span class="keyword">case</span> wall
<span class="keyword">case</span> floor
<span class="keyword">case</span> ceiling
<span class="keyword">case</span> table
<span class="keyword">case</span> seat
<span class="keyword">case</span> door
<span class="keyword">case</span> window
}
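
// Usage sketch: reading the classification of newly detected planes
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let planeAnchor as ARPlaneAnchor in anchors {
        print("Detected a plane classified as:", planeAnchor.classification)
    }
}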
</code></pre><h2>Raycasting</h2><p>This year, with the new raycasting API, you can place your content more precisely. It supports every kind of surface alignment, not only vertical and horizontal. You can also track your raycast over time: as you move your device around, it detects more information in real time and places your object on top of the physical surface more accurately as the planes evolve.</p><p>Start by performing a raycast query. It takes three parameters: from where to perform the raycast (in the example, the screen center), what you want to allow in order to place the content, and the alignment, which can be vertical, horizontal or any. Then pass the query to the trackedRaycast method, which has a callback that allows you to react to the result, and finally stop it when you are done.</p><pre><code><span class="comment">// Create a raycast query</span>
<span class="keyword">let</span> query = arView.<span class="call">raycastQuery</span>(from: screenCenter,
allowing: .<span class="dotAccess">estimatedPlane</span>,
alignment: .<span class="dotAccess">any</span>)
<span class="comment">// Perform the raycast</span>
<span class="keyword">let</span> raycast = session.<span class="call">trackedRaycast</span>(query) { results <span class="keyword">in</span>
<span class="comment">// Refine object position with raycast update</span>
<span class="keyword">if let</span> result = results.<span class="property">first</span> {
object.<span class="property">transform</span> = result.<span class="property">transform</span>
}
}
<span class="comment">// Stop tracking the raycast</span>
raycast.<span class="call">stop</span>()
</code></pre><h2>Visual Coherence Enhancements</h2><p>Depth of field effect: the camera on the device always adjusts to the environment, so the virtual content can now match the depth of field and the object blends perfectly into the environment.<br>Additionally, when you move the camera quickly the object gets motion blur. Two new APIs are HDR environment textures and camera grain.<br><br>With HDR, or high dynamic range, you can capture those highlights that make your content more vibrant.<br>Every camera produces some grain, and in low light it can be a bit heavier. With this API we can apply those same grain patterns to your virtual content so it does not stand out.</p><pre><code><span class="keyword">class</span> ARWorldTrackingConfiguration {
<span class="keyword">var</span> wantsHDREnvironmentTextures: <span class="type">Bool</span> { <span class="keyword">get set</span> }
}
<span class="keyword">class</span> ARFrame {
<span class="keyword">var</span> cameraGrainIntensity: <span class="type">Float</span> { <span class="keyword">get</span> }
<span class="keyword">var</span> cameraGrainTexture: <span class="type">MTLTexture</span>? { <span class="keyword">get</span> } }
</code></pre><h2>Record and Replay</h2><p>To develop and prototype an AR experience you can record an AR sequence with the Reality Composer app. You can capture your environment, and ARKit will save your sensor data in a movie file container so that you can take it with you and put it in Xcode. The scheme settings in Xcode have a new setting which allows you to select that file. When that file is selected you can replay that experience. Ideal for prototyping.</p><h2>Sources</h2><p><a href="https://developer.apple.com/videos/play/wwdc2019/604/">WWDC Talk - Introducing ARKit3</a><br><br><a href="https://developer.apple.com/augmented-reality/quick-look/">https://developer.apple.com/augmented-reality/quick-look/</a><br><br><a href="https://developer.apple.com/go/?id=python-usd-library">Download usdz tools</a><br><br>
</p>]]></content:encoded></item><item><guid isPermaLink="true">https://multitudes.github.io/posts/Intro%20to%20ARKit%20with%20a%20simple%20Unity%20tutorial</guid><title>Intro to ARKit in Unity with a simple tutorial</title><description></description><link>https://multitudes.github.io/posts/Intro%20to%20ARKit%20with%20a%20simple%20Unity%20tutorial</link><pubDate>Sat, 20 Apr 2019 09:38:00 +0200</pubDate><content:encoded><![CDATA[<blockquote><p>For the world to be interesting, you have to be manipulating it all the time <br>Brian Eno</br></p></blockquote><h2>Intro to the Apple Augmented Reality framework ARKit</h2><p>ARKit is the Augmented Reality Framework for Apple devices which enables developers to create Augmented Reality Apps for iPhone & iPad. It was introduced along with iOS 11 during WWDC 2017.<br>Anyone using an iOS device that runs on Apple's A9 to A12 Bionic processors (& running iOS 11 or above) can use ARKit apps.<br>The current version is ARKit 2.0 and a new version of ARKit should be out soon, and an update is awaited at the WWDC 2019.</p><h2>How World Tracking Works</h2><p>To create a correspondence between real and virtual spaces, ARKit uses a technique called visual-inertial odometry.<br>This process combines information from the iOS device’s motion sensing hardware with computer vision analysis of the scene visible to the device’s camera.<br>ARKit recognizes notable features in the scene image, tracks differences in the positions of those features across video frames, and compares that information with motion sensing data.<br>The result is a high-precision model of the device’s position and motion. This process can often produce impressive accuracy, leading to realistic AR experiences. However, it relies on details of the device’s physical environment that are not always consistent or are difficult to measure in real time without some degree of error.</p><h2>Create your first ARKit application.</h2><p>In this tutorial, we are going to build a simple AR app in Unity using ARKit 2.0 image tracking feature</p><p>Pre-Requisites are:</p><ul><li>iOS 12 or later on our device</li><li>Xcode 10 or later on our Mac (free download from the App Store)</li><li>The Unity Engine (the free version is fine)</li></ul><p>In this tutorial I will show how to create an simple AR application for your iPhone or iPad in Unity and Xcode. For this you would need a picture both in digital format and printed (to be recognized by your iOS device). And three small mp4 videos in size of max 24Mb each.</p><h4>Install ARKit</h4><p>ARKit needs to be imported into Unity.</p><p>Go to this link:</p><p><a href="https://bitbucket.org/Unity-Technologies/unity-arkit-plugin/downloads/">https://bitbucket.org/Unity-Technologies/unity-arkit-plugin/downloads/</a></p><p>and download the ARKit 2.0 installer.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/13.32.57.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>The image detection feature was already introduced in ARKit 1.5, but it did not track the image; it would just trigger the AR experience. Now in ARKit 2.0 you can track the position of image targets as well. If you move the image target, any overlay that you have on that image target, whether it's a model or a video, will move along with the image.</p><h4>Open Unity</h4><p>Let's open the unzipped downloaded folder with Unity Hub. I am using <a href="https://unity3d.com/get-unity/download/archive">Unity 2018.3.11f1</a></p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/13.58.25.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Click on "Upgrade":</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/13.58.41.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Open the folder UnityARKitPlugin and the Examples folder. Your Unity might look different from the screenshots; look for the Assets folder in your Project tab.</p><p>Open the folder ARKit1.5 and then UnityARImageAnchor. In this folder double click on the <code>UnityARImageAnchor</code> scene. The scene will open in the hierarchy panel and on screen.</p><p>Open the 'ReferenceImages' folder. Drag the image you want to use as an image trigger into the <code>ReferenceImages</code> folder:</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.05.40.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Now go up one level in the folder hierarchy and click on UnityLogoReferenceImage, which is part of the Unity demo. We are going to change the trigger image.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.10.02.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Drag and drop your image from the ReferenceImages folder to the Image Texture field in the Inspector.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.10.09.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>In my case, the image is called earth.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.12.49.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Right-click in the hierarchy and select “Create Empty”. You get an empty GameObject. Rename it to “parent”.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.14.21.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Right-click again in the Hierarchy tab and select 3D Object, then Plane.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.16.13.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>It looks like this.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.17.01.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Rename the Plane to “image” in the hierarchy.</p><p>Reset the position of both the Plane and the parent to 0.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.17.54.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Measure the real dimensions of your printed image and enter them in the scale property. Remember they are in meters, so x will be e.g. 0.0109 and z 0.006138.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.20.32.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>In the ReferenceImages folder, right-click and create a material for the card, and name it “image”.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.25.55.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>In the Inspector, set the shader to Legacy Shaders, then Diffuse.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.28.05.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Click the small texture square on the right and select your image from the popup window.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.30.49.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Then, from the dropdown, select UI, then Default.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.32.07.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Now drag the material called image to the image in the hierarchy.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.32.51.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>The image selected as the image trigger will be visible on the plane in the scene.</p><p>This is how it looks for me. You can rotate or change things in the Inspector if you need to.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.34.33.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Drag the image plane inside the parent.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.45.06.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Duplicate it three times (Command-D on the Mac). The copies will sit one on top of the other, so click in the Scene view and drag them to where you need them.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.47.23.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>It will now look like this:</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.50.14.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>The first plane will be our trigger image. The other three will hold the videos, which will play and move along with the image. Let's attach the three videos to these three new planes. Rename them to "1", "2" and "3", for instance.</p><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/14.54.08.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><p>Drag the three videos into the ReferenceImages folder, and then one by one from the folder onto the three planes 1, 2 and 3 you just created.</p><p>If you click Play, you should see the three videos playing.</p><p>Now drag the parent object from the hierarchy into the Assets folder; a prefab will be created automatically.</p><p>Delete the parent object from the hierarchy.</p><p>Select "GenerateImageAnchorCube" in the hierarchy. You need to drag the prefab we just created to the "prefab to generate" field of "GenerateImageAnchorCube" in the Inspector view.</p><h4>Build the project in Unity</h4><ul><li>Go to File -> Build Settings.</li><li>Edit your bundle ID if not already done: click on "Player Settings" in the "Build Settings" window (bottom left). A new window will open, and the "Bundle Id" will be under "Other Settings".</li></ul><p align="center">
<img src="https://multitudes.github.io/images/ARKittutorialscreenshots/211.43.18.png" class="pure-img-responsive" title="ARKittutorialscreenshots"></img>
</p><ul><li>Switch to the iOS platform.</li><li>Click on Add Open Scenes.</li><li>Build and save.</li></ul><p>This will create an Xcode build.</p><h4>Open the build in Xcode</h4><p>Open the Unity project build in Xcode and go to the images in the assets folder, then to resources, and click on your image. Change the units to centimeters and enter the dimensions of your physical image. Select your team in General, and make sure that the deployment target is 12.0.<br>Choose your device, then run to build and install the app.</p><h3>Sources and links:</h3><p>This tutorial has been inspired by the excellent video tutorial by <a href="https://youtu.be/POIYPIJtgtM">Parth Anand</a>.</p><p>About <a href="https://www.theverge.com/tldr/2017/7/26/16035376/arkit-augmented-reality-a-ha-take-on-me-video-trixi-studios">ARKit</a> - The Verge (July 2017)</p><p>Use of ARKit and the Procreate app for iPad: <a href="https://twitter.com/jaromvogel/status/1125401258494324736">https://twitter.com/jaromvogel</a></p><p>I’ve seen a bunch of ARKit demos that made me think “That’s very cool”. This was the first one that made me think “That’s very useful” <a href="https://twitter.com/madewitharkit">https://twitter.com/madewitharkit</a></p><h4>AR Apps for the iPhone / iPad</h4><p>The App Store offers a curated section to highlight the best AR experiences. Just tap on Apps and scroll down to Top Categories. AR Apps is at the top.</p><p>The <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=2ahUKEwiF-4O75IviAhUwMewKHRIxAHMQFjAAegQIAhAB&url=https%3A%2F%2Fitunes.apple.com%2Fus%2Fapp%2Fikea-place%2Fid1279244498%3Fmt%3D8&usg=AOvVaw0JuWZkNMjTuCTziRMISM9K">Ikea</a> app has been one of the first demos for ARKit.</p><p>And this is the official Apple page for ARKit developers: <a href="https://developer.apple.com/arkit/">https://developer.apple.com/arkit/</a></p><hr>
]]></content:encoded></item><item><guid isPermaLink="true">https://multitudes.github.io/posts/How%20to%20Enable%20Wireless%20Building%20to%20Your%20Phone%20or%20iPad%20in%20Xcode</guid><title>How to Enable Wireless Building in Xcode</title><description></description><link>https://multitudes.github.io/posts/How%20to%20Enable%20Wireless%20Building%20to%20Your%20Phone%20or%20iPad%20in%20Xcode</link><pubDate>Tue, 9 Apr 2019 09:38:00 +0200</pubDate><content:encoded><![CDATA[<p>Many folks don't know about, or forget, this interesting feature of Xcode: wireless building on device.</p><p>No need for cables, or rather, you will need a cable just once, to set up the wireless syncing.<br>I am using an iPad with a new Mac Mini set up as a headless server (without a monitor, accessed via a MacBook Pro on the same local network).<br>Using Xcode on the Mac Mini gives me faster compile times thanks to its latest-generation "Coffee Lake" Intel i7 processor, but I would need to connect my device to the Mini, which is in another room.<br>Not only that, but I want to be able to work on my old MacBook most of the time, with my iPhone and iPad next to me, without being physically attached to the Mini.<br><br>The solution is pretty amazing and I will show you how to set it up. It is very easy, and it is quite magical to see your iOS devices suddenly waking up and running your app.<br>In Xcode, go to the <code>Window</code> menu in the top menu bar. There is a menu item called <code>Devices and Simulators</code>.<br><br></p><p align="center">
<img src="https://multitudes.github.io/images/XcodeWirelessSync/2.png" class="pure-img-responsive" title="Wireless Building in Xcode"></img>
</p><p>Click on it and the following window will open. There is nothing inside!<br><br></p><p align="center">
<img src="https://multitudes.github.io/images/XcodeWirelessSync/1.png" class="pure-img-responsive" title="Wireless Building in Xcode"></img>
</p><p>The reason is that for the first sync you will need a cable. So connect the device and try again.<br>This time you will see the device show up!<br><br><br></p><p align="center">
<img src="https://multitudes.github.io/images/XcodeWirelessSync/3.png" class="pure-img-responsive" title="Wireless Building in Xcode"></img>
</p><p>You need to tick the box <code>Connect via network</code> like this:<br><br><br></p><p align="center">
<img src="https://multitudes.github.io/images/XcodeWirelessSync/4.png" class="pure-img-responsive" title="Wireless Building in Xcode"></img>
</p><p>Now you can disconnect the cable and you will be able to build your app on your iOS device without connecting it with a cable! Enjoy.</p>]]></content:encoded></item><item><guid isPermaLink="true">https://multitudes.github.io/posts/Regular%20Expressions%20Primer</guid><title>Regular Expressions Primer</title><description></description><link>https://multitudes.github.io/posts/Regular%20Expressions%20Primer</link><pubDate>Thu, 17 Jan 2019 10:41:00 +0100</pubDate><content:encoded><![CDATA[<h2>Introduction</h2><p>People use different names to refer to regular expressions, such as Regexp, Regex, or even ASCII puke(!).</p><p>Regex are a way to express a set of strings. They can be used to 'scrape' websites to extract links, information, titles and more. They are simply a series of codes that you use to define exactly what text you are looking for, and that text can be numbers, letters, punctuation, etc.</p><p>This code is used to match patterns. Let's see an example. I want to find all numbers, one to five digits long, followed by a space and the words "Hello World".</p><p>For example: <code>12345 Hello World.</code> would be matched by <code>\d{1,5}\s\w+\s\w+\.</code></p><p>We build the pattern with the following syntax (in some languages, such as JavaScript, forward slashes delimit the start and end of a regular expression literal).</p><p><code>\d{1,5}</code></p><p>This means: a number between one and five digits long. Whenever you use curly brackets, you are saying that you expect between n and m repetitions of what precedes them.</p><p>In our example, the number is followed by a space. The regular expression for a space is</p><p><code>\s</code></p><p>After this we expect a word of one or more characters. <code>\w</code> represents any word character, and the <code>+</code> means one or more.</p><p><code>\w+</code></p><p>again followed by a space and another word</p><p><code>\s\w+</code></p><p>then the period. In a regex, an unescaped period matches any character except a newline, so we need to escape it with a backslash.</p><p><code>\.</code></p><p>We now put all this together:</p><p><code>\d{1,5}\s\w+\s\w+\.</code></p><p>Nice, isn't it?</p><h4>A bit of theory</h4><p>These are the commonly used tokens:</p><p><code>\d</code> any number (digit)</p><p><code>\D</code> with the capital D, anything but a number (digit)</p><p><code>\s</code> a whitespace character</p><p><code>\S</code> anything but a whitespace character</p><p><code>\w</code> any word character (letter, digit or underscore)</p><p><code>\W</code> anything but a word character</p><p><code>.</code> matches any character except a line break</p><p><code>\b</code> a word boundary: the position at the start or end of a whole word</p><p><code>?</code> zero or one repetition of the token which precedes it</p><p><code>*</code> zero or more repetitions</p><p><code>+</code> one or more repetitions</p><p><code>{n}</code> if you know exactly how many repetitions, insert the amount in curly brackets</p><p><code>{m,n}</code> means m to n repetitions, as seen above</p><p><code>\e</code> the escape (ESC) character</p><p><code>\f</code> form feed</p><p><code>\n</code> newline</p><p><code>\r</code> carriage return</p><p><code>\t</code> horizontal tab</p><p>Curly brackets can also be used to find commonly misspelled words. 
For example, tomorrow can be misspelled as tommorow, tommorrow, and so on.</p><p>We can find these in this way:</p><p><code>tomm{0,1}orr{0,1}ow</code></p><p><code>[a-z]</code> any lower case letter</p><p><code>[A-Z]</code> any upper case letter</p><p><code>[0-9]</code> any number</p><p><code>[a-cC-R3-4]</code> any lowercase letter from a to c, or uppercase letter from C to R, or number from 3 to 4</p><h4>An example</h4><p>Let's look for the name Catherine in our text, but we want to find her last name as well.</p><p>We will use the OR operator <code>|</code> because we want to find the full name but also the shorter version Cath, followed by a space and another word of one or more characters:</p><p><code>Cat(herine|h)\b\s\w+\b</code></p><p>This matches the same strings as</p><p><code>(Catherine|Cath)\b\s\w+\b</code></p><p>If I add <code>\1</code> at the end, it is a backreference: it matches the exact text that group 1 (the parentheses) captured, one more time.</p><p><code>(Catherine|Cath)\b\s\w+\b\s\1</code> is almost the same as <code>(Catherine|Cath)\b\s\w+\b\s(Catherine|Cath)</code>, except that <code>\1</code> must repeat exactly what the group captured.</p><p>The caret <code>^</code> means at the beginning of a line (or of the text). So to find all lines starting with <code>Dog</code>:</p><p><code>(^Dog\s.+$)</code><br><br>This will find all lines starting with Dog followed by a space and one or more characters (but no newline), up to the end of the line (<code>$</code>).</p><h2>Regex with Python</h2><p>Now let's apply what we have learned in Python:</p><pre><code>
# import our regex library
import re

# let's take a random text
# (if we had a file, we could use open('filename.txt').read() instead)
text = "aAsdfgh$%#n Cat iuhi"

# define our regular expression with the compile function
expression = re.compile('Cat')

# tell Python to look for it
pattern = re.search(expression, text)

# print the matched text using the group function
print(pattern.group())   # Cat

# To find all occurrences there is the method findall.
# Let's print all the letters, and only the letters, in our text.
# If the text came from a file, we would first collect its lines into one string:
strToSearch = ""
for line in text.splitlines():
    strToSearch += line
print(strToSearch)

expression = re.compile('[a-z]', re.IGNORECASE)
pattern2 = re.findall(expression, strToSearch)
for i in pattern2:
    print(i, end='')   # aAsdfghnCatiuhi
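
# --- An extra illustration, not from the original post: a quick sketch trying
# --- out the patterns discussed above directly in Python.
# the opening example: a 1-5 digit number, a space, two words and a period
print(re.search(r'\d{1,5}\s\w+\s\w+\.', '12345 Hello World.').group())
# 12345 Hello World.
# the misspelling pattern built with curly brackets
print(re.findall(r'tomm{0,1}orr{0,1}ow', 'tomorrow tommorow tommorrow'))
# ['tomorrow', 'tommorow', 'tommorrow']
# a backreference: \1 matches exactly the text captured by group 1
print(re.search(r'(Catherine|Cath)\b\s\w+\b\s\1', 'Catherine Smith Catherine').group())
# Catherine Smith Catherine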
</code></pre><h2>Example from fastai DL course v3</h2><p>Let's look at this example from the fastai Deep Learning for Coders course, version 3. We have a set of image files in a folder with names like</p><p>'data/oxford-iiit-pet/images/american_bulldog_146.jpg'</p><p>The label for that particular picture is in the name of the file. Let's see how we can extract it with a regex pattern:</p><p><code>/([^/]+)_\d+.jpg$</code></p><p><code>/</code> matches the literal forward slash just before the file name</p><p><code>()</code> the parentheses capture what we are looking for, a group of characters defined as:</p><p><code>[^/]+</code> inside square brackets the <code>^</code> is a negation, so it says no forward slash, and <code>+</code> means arbitrarily long, so <code>([^/]+)</code> looks for a group of characters that contains no forward slashes and is arbitrarily long.</p><p><code>_</code> literally expects an underscore character here</p><p>The regex <code>\d</code> refers to numerical digits and the plus <code>+</code> sign that comes after it means that there may be arbitrarily many digits. This looks for the numerical ID of the images.</p><p><code>.jpg$</code> The dollar sign at the end says that we only match file names which end with <code>.jpg</code> (strictly speaking, the unescaped dot matches any character, so <code>\.jpg$</code> would be safer).</p><p>The Python code to extract the label would be:</p><pre><code><span class="keyword">import</span> re
string = 'data/oxford-iiit-pet/images/american_bulldog_146.<span class="property">jpg</span>'
pat = r'([^/]+)<span class="keyword">_</span>\d+.jpg$'
pat = re.<span class="call">compile</span>(pat)
<span class="call">print</span>(pat.<span class="call">search</span>(string).<span class="call">group</span>(<span class="number">1</span>))
# we <span class="keyword">get</span> this result and not the whole string,
# because in the previous expression we requested 'group <span class="number">1</span>', which <span class="keyword">is</span>
# that part of the regex with the parentheses
>american_bulldog
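
# --- Extra usage sketch, not from the original post: the same compiled pattern
# --- applied to a couple of hypothetical file names from the same dataset.
for s in ['data/oxford-iiit-pet/images/german_shorthaired_105.jpg',
          'data/oxford-iiit-pet/images/Abyssinian_12.jpg']:
    print(pat.search(s).group(1))
# german_shorthaired
# Abyssinian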
</code></pre><h2>Regex in HTML5</h2><p>Regex can be used in HTML5 for form validation with the pattern attribute.</p><pre><code><input name=<span class="string">"zip"</span> pattern=<span class="string">"\d{5}"</span> />
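
<!-- A slightly fuller sketch, my addition and not from the original post: the pattern
     must match the whole value, and an empty field passes unless `required` is set. -->
<form>
  <label>ZIP code: <input name="zip" pattern="\d{5}" required title="five digits" /></label>
  <button>Send</button>
</form>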
</code></pre><h2>In the command line</h2><p><code>cd</code> to the folder where you are looking for a specific file. Suppose you are looking for all files with a name like aaa_431x303.jpg. We can use <code>find</code> with <code>-regex</code> and a pattern to find them (the <code>-E</code> flag enables extended regular expressions on BSD/macOS find):</p><pre><code>find -<span class="type">E</span> . -regex '.*[<span class="number">0</span>-<span class="number">9</span>]{<span class="number">2</span>,<span class="number">4</span>}x[<span class="number">0</span>-<span class="number">9</span>]{<span class="number">2</span>,<span class="number">4</span>}\.<span class="property">jpg</span>'
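
# An aside, not from the original post: -E is the BSD/macOS flag for extended
# regular expressions; on GNU find (most Linux distros) the equivalent is:
find . -regextype posix-extended -regex '.*[0-9]{2,4}x[0-9]{2,4}\.jpg'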
</code></pre><h2>Resources</h2><ul><li>The excellent <a href="https://leaverou.github.io/regexplained/">Regex Playground</a> by Lea Verou</li><li>Tutorials at <a href="https://regexr.com/">RegExr</a></li><li>The video tutorial <a href="https://www.youtube.com/watch?v=DRR9fOXkfRE&feature=youtu.be">Understanding Regular Expressions (12 minute video)</a></li></ul>]]></content:encoded></item></channel></rss>