Revision 877

trunk/docs/manuals/admb-re/admbre.tex (revision 877)
739 739

  
740 740
PROCEDURE_SECTION
741 741

  
742
  int k=0;
742
  int k=1;
743 743
  L(1,1) = 1.0;
744 744
  for(i=2;i<=4;i++)
745 745
  {
......
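The hunk above changes the start value of the index \texttt{k} from 0 to 1, matching ADMB's default 1-based vector indexing. For context, a minimal sketch of how a loop like this typically continues, filling the sub-diagonal of the factor \texttt{L} from a parameter vector (\texttt{L} appears in the hunk; the vector name \texttt{a} and the unit-diagonal convention are assumptions, not taken from the revision):

\begin{lstlisting}
PROCEDURE_SECTION
  int k=1;                    // 1-based index into the parameter vector
  L(1,1) = 1.0;               // unit diagonal
  for(i=2;i<=4;i++)
  {
    L(i,i) = 1.0;
    for(int j=1;j<=i-1;j++)
      L(i,j) = a(k++);        // consume parameters row by row
  }
\end{lstlisting}

Starting at \texttt{k=1} and post-incrementing visits \texttt{a(1)}, \texttt{a(2)}, \ldots, which is presumably what motivated the change from \texttt{k=0}.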
872 872
\section{Built-in data likelihoods}
873 873

  
874 874
In the simple example \texttt{simple.tpl}, the mathematical expressions for all
875
log-likehood contributions where written out in full detail. You may have hoped
875
log-likelihood contributions were written out in full detail. You may have hoped
876 876
that for the most common probability distributions, there were functions written
877 877
so that you would not have to remember or look up their log-likelihood
878 878
expressions. If your density is among those given in
......
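The sentence is cut off by the diff viewer, but the section's point is that ready-made functions return the negative log-likelihood of common densities, so the expressions need not be written out by hand. A hedged sketch of the intended usage (the function \texttt{dnorm} and the convention that it returns the negative log-density are assumptions here; the manual's own table gives the exact names and signatures):

\begin{lstlisting}
PROCEDURE_SECTION
  // Accumulate the negative log-likelihood of normally distributed
  // data y(1..n) via a built-in density function; y, n, mu, sigma
  // and the objective g are illustrative names.
  for (int i=1;i<=n;i++)
    g += dnorm(y(i), mu, sigma);   // assumed to return -log N(mu,sigma^2)
\end{lstlisting}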
1502 1502
\index{Gauss-Hermite quadrature}
1503 1503

  
1504 1504
In the situation where the model is separable of type ``Block diagonal
1505
Hessian,'' with only a single random effect in each block (see
1506
Section~\ref{separability}), Gauss-Hermite quadrature is available as an option
1505
Hessian,'' (see Section~\ref{separability}), 
1506
Gauss-Hermite quadrature is available as an alternative
1507 1507
to the Laplace approximation and to the \texttt{-is} (importance sampling)
1508 1508
option. It is invoked with the command line option \texttt{-gh N}, where \texttt{N}
1509 1509
is the number of quadrature points.
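For orientation, what \texttt{-gh N} computes within each univariate block is standard Gauss-Hermite quadrature; a textbook statement (notation ours, not the manual's) is
\begin{equation*}
  \int_{-\infty}^{\infty} f(u)\, e^{-u^{2}}\, du \;\approx\; \sum_{k=1}^{N} w_{k}\, f(x_{k}),
\end{equation*}
where the nodes $x_{k}$ are the roots of the Hermite polynomial $H_{N}$ and the $w_{k}$ are the corresponding weights. A hypothetical invocation with ten quadrature points would be \texttt{mymodel -gh 10} (executable name assumed).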
1510 1510

  
1511
\subsection{Frequency weighting for multinomial likelihoods}
1511
%\subsection{Frequency weighting for multinomial likelihoods}
1512
%
1513
%In situations where the response variable can take on only a finite number of
1514
%different values, it is possible to reduce the computational burden enormously.
1515
%As an example, consider a situation where observation $y_{i}$ is binomially
1516
%distributed with parameters $N=2$ and $p_{i}$. Assume that
1517
%\begin{equation*}
1518
%  p_{i}=\frac{\exp (\mu +u_{i})}{1+\exp (\mu +u_{i})},
1519
%\end{equation*}
1520
%where $\mu$ is a parameter and $u_{i}\sim N(0,\sigma ^{2})$ is a random effect.
1521
%For independent observations $y_1,\ldots,y_n$, the log-likelihood function for
1522
%the parameter $\theta =(\mu ,\sigma )$ can be written as
1523
%\begin{equation}
1524
%  l(\theta )=\sum_{i=1}^{n}\log \bigl[\, p(y_{i};\theta )\bigr] .
1525
%\end{equation}
1526
%In \scAR, $p(y_{i};\theta)$ is approximated using the Laplace approximation.
1527
%However, since $y_i$ can take only the values~$0$, $1$, and~$2$, we can rewrite
1528
%the log-likelihood~as
1529
%\begin{equation}\label{l_w}
1530
%l(\theta )=\sum_{j=0}^{2}n_{j}\log \left[ p(j;\theta )\right],
1531
%\end{equation}
1532
%where $n_j$ is the number of $y_i$s equal to $j$. Still, the Laplace
1533
%approximation must be used to approximate $p(j;\theta )$, but now only for
1534
%$j=0,1,2$, as opposed to $n$~times above. For large~$n$, this can give a
1535
%large reduction in computing time.
1536
%
1537
%To implement the weighted log-likelihood~(\ref{l_w}), we define a weight vector
1538
%$(w_1,w_2,w_3)=(n_{0},n_{1},n_{2})$. To read the weights from file, and to tell
1539
%\scAR\ that~\texttt{w} is a weights vector, the following code is used:
1540
%\begin{lstlisting}
1541
%DATA_SECTION
1542
% init_vector w(1,3)
1543
%
1544
%PARAMETER_SECTION
1545
% !! set_multinomial_weights(w);
1546
%\end{lstlisting}
1547
%In addition, it is necessary to explicitly multiply the likelihood contributions
1548
%in~(\ref{l_w}) by~$w$. The program must be written with
1549
%\texttt{SEPARABLE\_FUNCTION}, as explained in Section~\ref{sec:nested}. For the
1550
%likelihood~(\ref{l_w}), the \texttt{SEPARABLE\_FUNCTION} will be invoked three
1551
%times.
1552
%
1553
%See a full example
1554
%\href{http://www.otter-rsch.com/admbre/examples/weights/weights.html}{here}.
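The commented-out section gives the weighting mechanism only in fragments; a hedged sketch of how the pieces could fit together in one TPL (the \texttt{set\_multinomial\_weights} call and the \texttt{SEPARABLE\_FUNCTION} requirement come from the text above, while the model body and the names \texttt{mu}, \texttt{log\_sigma}, \texttt{u}, \texttt{g} are illustrative assumptions):

\begin{lstlisting}
DATA_SECTION
  init_vector w(1,3)                // frequency weights (n_0,n_1,n_2)

PARAMETER_SECTION
  !! set_multinomial_weights(w);    // declare w as a weights vector
  init_number mu
  init_number log_sigma
  random_effects_vector u(1,3)      // one random effect per distinct y-value
  objective_function_value g

PROCEDURE_SECTION
  for (int j=1;j<=3;j++)
    ll_group(j,mu,log_sigma,u(j));  // SEPARABLE_FUNCTION invoked 3 times

SEPARABLE_FUNCTION void ll_group(int j, const dvariable& mu, const dvariable& log_sigma, const dvariable& u)
  g -= -0.5*log(2.0*M_PI) - 0.5*square(u);       // u ~ N(0,1)
  dvariable p = mfexp(mu + mfexp(log_sigma)*u);  // logit link, u scaled by sigma
  p /= (1.0 + p);
  int y = j - 1;                                 // response value 0, 1, or 2
  // each contribution must be multiplied explicitly by its weight w(j)
  g -= w(j)*( log_comb(2,y) + y*log(p) + (2-y)*log(1.0-p) );
\end{lstlisting}

The linked weights example remains the authoritative version; the sketch only shows where the weight multiplies the likelihood contribution and why the separable function runs three times rather than $n$ times.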
1512 1555

  
1513
In situations where the response variable can take on only a finite number of
1514
different values, it is possible to reduce the computational burden enormously.
1515
As an example, consider a situation where observation $y_{i}$ is binomially
1516
distributed with parameters $N=2$ and $p_{i}$. Assume that
1517
\begin{equation*}
1518
  p_{i}=\frac{\exp (\mu +u_{i})}{1+\exp (\mu +u_{i})},
1519
\end{equation*}
1520
where $\mu$ is a parameter and $u_{i}\sim N(0,\sigma ^{2})$ is a random effect.
1521
For independent observations $y_1,\ldots,y_n$, the log-likelihood function for
1522
the parameter $\theta =(\mu ,\sigma )$ can be written as
1523
\begin{equation}
1524
  l(\theta )=\sum_{i=1}^{n}\log \bigl[\, p(y_{i};\theta )\bigr] .
1525
\end{equation}
1526
In \scAR, $p(y_{i};\theta)$ is approximated using the Laplace approximation.
1527
However, since $y_i$ can take only the values~$0$, $1$, and~$2$, we can rewrite
1528
the log-likelihood~as
1529
\begin{equation}\label{l_w}
1530
l(\theta )=\sum_{j=0}^{2}n_{j}\log \left[ p(j;\theta )\right],
1531
\end{equation}
1532
where $n_j$ is the number of $y_i$s equal to $j$. Still, the Laplace
1533
approximation must be used to approximate $p(j;\theta )$, but now only for
... This diff was truncated because it exceeds the maximum size that can be displayed.
