
Commit 22fe292

L23 cosmetic
1 parent 8f8bf06 commit 22fe292

3 files changed: +22 -22 lines changed


lectures/459.bib

Lines changed: 1 addition & 1 deletion
@@ -1273,7 +1273,7 @@ @inproceedings{coz
 }
 
 @article{gptforgood,
-title = {ChatGPT for good? On opportunities and challenges of large language models for education},
+title = {{ChatGPT} for good? On opportunities and challenges of large language models for education},
 journal = {Learning and Individual Differences},
 volume = {103},
 pages = {102274},

lectures/L23-slides.tex

Lines changed: 6 additions & 6 deletions
@@ -256,7 +256,7 @@ \part{Password Cracking}
 
 You do not even need to make them yourself anymore (if you don't want) because you can download them on the internet... they are not hard to find.
 
-They are large, yes, but in the 25 - 900 GB range.
+They are large, yes, but in the 25--900 GB range.
 
 
 \end{frame}
@@ -297,7 +297,7 @@ \part{Password Cracking}
 
 \begin{itemize}
 \item Table generation on a GTX295 core for MD5 proceeds at around 430M links/sec.
-\item Cracking a password 'K\#n\&r4Z': real: 1m51.962s, user: 1m4.740s. sys: 0m15.320s
+\item Cracking a password 'K\#n\&r4Z': \\ \qquad real: 1m51.962s, user: 1m4.740s. sys: 0m15.320s
 \end{itemize}
 
Yikes.
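
(For context on what a "link" is in the 430M links/sec figure above: each link in a rainbow-table chain is one hash of a candidate password followed by one reduction of the digest back into the password space. Below is a rough Python sketch of chain generation; the character set, password length, and reduction function are illustrative assumptions, not the parameters behind the timings quoted in the slide.)

import hashlib
import string

CHARSET = string.ascii_letters + string.digits   # assumed password alphabet
PW_LEN = 7                                        # assumed fixed password length


def reduce_to_password(digest: bytes, link_index: int) -> str:
    # Reduction step: map a digest back into the password space.
    # Mixing in the link index gives each chain position its own
    # reduction function, which is what makes the table a "rainbow" table.
    value = int.from_bytes(digest, "big") + link_index
    chars = []
    for _ in range(PW_LEN):
        value, r = divmod(value, len(CHARSET))
        chars.append(CHARSET[r])
    return "".join(chars)


def build_chain(start_password: str, chain_length: int) -> str:
    # Each loop iteration is one "link": hash the current candidate, then
    # reduce the digest to the next candidate.
    pw = start_password
    for i in range(chain_length):
        digest = hashlib.md5(pw.encode()).digest()
        pw = reduce_to_password(digest, i)
    return pw


print(build_chain("K#n&r4Z", 1000))  # prints the endpoint of one chain

Only the (start, end) pair of each chain is stored and the intermediate links are recomputed at lookup time, which is the time/memory trade-off that keeps finished tables in the 25--900 GB range mentioned earlier.
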
@@ -426,7 +426,7 @@ \part{Large Language Models}
 
 Such large language models have existed before, but ChatGPT ended up a hit because it's pretty good at being ``conversational''.
 
-This is referred to as Natural Language Processing (NLP).
+This is within the ambit of Natural Language Processing (NLP).
 
 \end{frame}
 
@@ -700,7 +700,7 @@ \part{Large Language Models}
 \begin{frame}
 \frametitle{Next Idea}
 
-We seem to be memory limited -- let's see what we can do?
+We seem to be memory limited---let's see what we can do?
 
First idea: \alert{gradient accumulation}.
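
(As a reminder of the technique: run several small forward/backward passes so gradients add up in the parameter buffers, then take a single optimizer step, giving the effect of a larger batch without its memory footprint. A minimal PyTorch sketch with a toy model follows; none of this is the lecture's actual training code.)

import torch
from torch import nn

# Illustrative stand-ins for the real model, data, and hyperparameters.
model = nn.Linear(16, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

ACCUM_STEPS = 4    # one optimizer update per 4 micro-batches
MICRO_BATCH = 8    # small batch that fits in memory

optimizer.zero_grad()
for step in range(32):
    x = torch.randn(MICRO_BATCH, 16)            # stand-in micro-batch of inputs
    y = torch.randn(MICRO_BATCH, 1)             # stand-in targets
    loss = loss_fn(model(x), y) / ACCUM_STEPS   # scale so accumulated grads average out
    loss.backward()                             # gradients add up in the .grad buffers
    if (step + 1) % ACCUM_STEPS == 0:
        optimizer.step()                        # update as if one batch of ACCUM_STEPS * MICRO_BATCH
        optimizer.zero_grad()

Memory stays bounded by the micro-batch size because only one micro-batch's activations are live at a time, while the update behaves like one taken on the larger effective batch.
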
 
@@ -731,7 +731,7 @@ \part{Large Language Models}
 \begin{frame}
 \frametitle{I got Suspicious}
 
-I got suspicious about the 128 dropoff in memory usage and it made me think about other indicators -- is it getting worse somehow?
+I got suspicious about the 128 dropoff in memory usage and it made me think about other indicators---is it getting worse somehow?
 
 The output talks about training loss...
 
@@ -791,7 +791,7 @@ \part{Large Language Models}
 \end{tabular}
 \end{center}
 
-Interesting results -- maybe a little concerning?
+Interesting results---maybe a little concerning?
 
 \end{frame}
 