<div class="d-none"><math> | |||
\newcommand{\indep}[0]{\ensuremath{\perp\!\!\!\perp}} | |||
\newcommand{\dpartial}[2]{\frac{\partial #1}{\partial #2}} | |||
\newcommand{\abs}[1]{\left| #1 \right|} | |||
\newcommand\autoop{\left(} | |||
\newcommand\autocp{\right)} | |||
\newcommand\autoob{\left[} | |||
\newcommand\autocb{\right]} | |||
\newcommand{\vecbr}[1]{\langle #1 \rangle} | |||
\newcommand{\ui}{\hat{\imath}} | |||
\newcommand{\uj}{\hat{\jmath}} | |||
\newcommand{\uk}{\hat{k}} | |||
\newcommand{\V}{\vec{V}} | |||
\newcommand{\half}[1]{\frac{#1}{2}} | |||
\newcommand{\recip}[1]{\frac{1}{#1}} | |||
\newcommand{\invsqrt}[1]{\recip{\sqrt{#1}}} | |||
\newcommand{\halfpi}{\half{\pi}} | |||
\newcommand{\windbar}[2]{\Big|_{#1}^{#2}} | |||
\newcommand{\rightinfwindbar}[0]{\Big|_{0}^\infty} | |||
\newcommand{\leftinfwindbar}[0]{\Big|_{-\infty}^0} | |||
\newcommand{\state}[1]{\large\protect\textcircled{\textbf{\small#1}}} | |||
\newcommand{\shrule}{\\ \centerline{\rule{13cm}{0.4pt}}} | |||
\newcommand{\tbra}[1]{$\bra{#1}$} | |||
\newcommand{\tket}[1]{$\ket{#1}$} | |||
\newcommand{\tbraket}[2]{$\braket{1}{2}$} | |||
\newcommand{\infint}[0]{\int_{-\infty}^{\infty}} | |||
\newcommand{\rightinfint}[0]{\int_0^\infty} | |||
\newcommand{\leftinfint}[0]{\int_{-\infty}^0} | |||
\newcommand{\wavefuncint}[1]{\infint|#1|^2} | |||
\newcommand{\ham}[0]{\hat{H}} | |||
\newcommand{\mathds}{\mathbb}</math></div> | |||
In this chapter, we have learned the following concepts:
<ul><li> Out-of-distribution generalization and its impossibility
</li>
<li> Invariance as a core principle behind out-of-distribution generalization (illustrated in the first sketch below)
</li>
<li> Preference modeling for training a language model, as causal learning (illustrated in the second sketch below)
</li>
</ul>
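To make the first two points concrete, here is a minimal sketch of the invariance idea: across two simulated environments, the mechanism from the causal feature <math>x_1</math> to <math>y</math> stays fixed, while the association between the spurious feature <math>x_2</math> and <math>y</math> changes. The data-generating process, variable names, and coefficient values are illustrative assumptions rather than material from the chapter; the regression check is in the spirit of invariant causal prediction, not a specific algorithm covered here.

<syntaxhighlight lang="python">
# Toy illustration (not from the chapter) of invariance across
# environments: y is caused by x1 alone, while x2 is a descendant of y
# whose coupling strength c to y changes from environment to environment.
import numpy as np

rng = np.random.default_rng(0)

def make_env(n, c):
    """One environment: y <- x1 + noise, x2 <- c * y + noise."""
    x1 = rng.normal(size=n)
    y = x1 + 0.1 * rng.normal(size=n)
    x2 = c * y + 0.1 * rng.normal(size=n)
    return np.stack([x1, x2], axis=1), y

# Only the strength of the spurious association differs across envs.
envs = {"env A (c=1.0)": make_env(50_000, 1.0),
        "env B (c=0.5)": make_env(50_000, 0.5)}

def ols(x, y):
    """Ordinary least-squares coefficients of y on the columns of x."""
    return np.linalg.lstsq(x, y, rcond=None)[0]

for name, (x, y) in envs.items():
    print(name)
    print("  y ~ x1      :", ols(x[:, :1], y))  # ~[1.0] in both envs
    print("  y ~ x2      :", ols(x[:, 1:], y))  # changes with c
    print("  y ~ x1 + x2 :", ols(x, y))         # also changes with c
</syntaxhighlight>

Only the regression of <math>y</math> on <math>x_1</math> alone yields a coefficient (close to 1) that is stable across environments; any predictor that leans on <math>x_2</math> changes with the environment and would generalize poorly to a new environment with yet another coupling strength.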
The goal of this chapter has been to introduce students to learning beyond the independently-and-identically-distributed (i.i.d.) setting by relying on concepts and frameworks from causal inference and, more broadly, causality. The topics covered in this chapter are sometimes referred to as ''causal machine learning''~<ref name="kaddour2022causal">{{cite journal|last1=Kaddour|first1=Jean|last2=Lynch|first2=Aengus|last3=Liu|first3=Qi|last4=Kusner|first4=Matt J|last5=Silva|first5=Ricardo|journal=arXiv preprint arXiv:2206.15475|year=2022|title=Causal machine learning: A survey and open problems}}</ref>.
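For the third point, the sketch below fits a Bradley-Terry preference model, a standard building block for turning pairwise preference judgments into a reward signal when training a language model. The toy "responses" are plain feature vectors, and the hidden weights, sample size, and step size are assumptions made purely for illustration.

<syntaxhighlight lang="python">
# Minimal Bradley-Terry preference model on toy data (illustrative only).
# Each "response" is a feature vector; an annotator prefers response a
# over b with probability sigmoid(r(a) - r(b)).
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0, 0.5])  # hidden reward weights (simulation only)

def sample_pair(d=3):
    """Return a (winner, loser) pair sampled from the Bradley-Terry model."""
    a, b = rng.normal(size=d), rng.normal(size=d)
    p_a = 1.0 / (1.0 + np.exp(-(a - b) @ w_true))
    return (a, b) if rng.random() < p_a else (b, a)

pairs = [sample_pair() for _ in range(5_000)]
diff = np.array([win - lose for win, lose in pairs])  # winner minus loser

# Fit reward weights by gradient descent on the negative log-likelihood
# of the observed preferences, -log sigmoid((winner - loser) @ w).
w = np.zeros(3)
for _ in range(500):
    s = 1.0 / (1.0 + np.exp(-diff @ w))        # P(winner preferred) under w
    w -= 0.1 * ((s - 1.0) @ diff) / len(diff)  # gradient step

print("recovered reward weights:", w)  # approaches w_true as data grows
</syntaxhighlight>

Because the preferences are simulated from the same Bradley-Terry model being fit, the recovered weights converge to the hidden ones as the number of preference pairs grows.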
==General references==
{{cite arXiv|last1=Cho|first1=Kyunghyun|year=2024|title=A Brief Introduction to Causal Inference in Machine Learning|eprint=2405.08793|class=cs.LG}}

==References==
{{reflist}}