Welcome to my blog! This is a sample post demonstrating that LaTeX math works perfectly here.
## Inline Math
Einstein’s famous equation: $E = mc^2$
The cross-entropy loss for binary classification: $\mathcal{L} = -\left[ y \log(\hat{y}) + (1 - y) \log(1 - \hat{y}) \right]$
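As a quick sanity check of the formula, here is a minimal plain-Python sketch (the function name and sample values are illustrative, not from a real library):

```python
import math

def binary_cross_entropy(y, y_hat):
    # L = -[y*log(y_hat) + (1 - y)*log(1 - y_hat)], averaged over samples
    return -sum(yi * math.log(p) + (1 - yi) * math.log(1 - p)
                for yi, p in zip(y, y_hat)) / len(y)

loss = binary_cross_entropy([1.0, 0.0, 1.0], [0.9, 0.2, 0.7])  # ≈ 0.2284
```

Confident predictions on correct labels (0.9 for a positive, 0.2 for a negative) contribute little loss; the less certain 0.7 dominates.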
## Block Math
The softmax function used in classification:
$$\sigma(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}} \quad \text{for } i = 1, \ldots, K$$
Something more complex:
$$\mathcal{L} = \frac{1}{2} g^{\mu\nu} \nabla_\mu \phi \, \nabla_\nu \phi - \frac{1}{2} m^2 \phi^2 - \frac{\lambda}{4!} \phi^4 + \bar{\psi}\left(i \gamma^\mu D_\mu - m\right)\psi - \frac{1}{4} F_{\mu\nu} F^{\mu\nu} + \int \frac{d^4 k}{(2\pi)^4} \, \frac{e^{-ikx}}{k^2 - m^2 + i\epsilon} + \sum_{n=1}^{\infty} \frac{(-1)^n}{n!} \left( \int d^4 x \, \mathcal{L}_{\text{int}} \right)^n$$
The attention mechanism from Transformers:
$$\text{Attention}(Q, K, V) = \text{softmax}\!\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$
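As a sketch of what that formula computes, here is a plain-Python version for small matrices (function names are illustrative; real code would use a tensor library):

```python
import math

def softmax(row):
    # Numerically stable softmax over one list of scores
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V, d_k):
    # scores[i][j] = (Q_i . K_j) / sqrt(d_k)
    scores = [[sum(q * k for q, k in zip(q_row, k_row)) / math.sqrt(d_k)
               for k_row in K] for q_row in Q]
    # Each row of weights sums to 1
    weights = [softmax(row) for row in scores]
    # output_i = sum_j weights[i][j] * V_j
    return [[sum(w * v_row[c] for w, v_row in zip(w_row, V))
             for c in range(len(V[0]))] for w_row in weights]
```

Each output row is a weighted average of the value rows, with weights set by how strongly the query matches each key.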
## Code Blocks
```python
import torch
import torch.nn.functional as F

def softmax(x):
    return F.softmax(x, dim=-1)
```
## Lists, links, and more
- Standard markdown works
- Links work
- **Bold** and *italic* work
- All the usual formatting
That’s it! Just create new `.md` files in `src/content/blog/` to add new posts.