<Londenp> Hi. Question for someone who knows the Parser functions: Is there a way to find out with the parser language whether a certain article is in a category?
Something like if AAA is in CAT:BBB then display a certain image?
<Splarka> probably need DPL for that (depending on what you wanna do exactly)
<Londenp> In fact it is for Wikibooks
<Londenp> We have categorized the books in phases 0 to 4
<Splarka> or in JS or CSS
<Londenp> I would like to show on the front page a bookname together with a symbol for the development states
<Londenp> I couldn't find a solution with magic words and a parser
<Splarka> Londenp: well, what categorizes it? a template? or the [[category]] added directly?
<Londenp> with a template
<Splarka> so, like {{devstate|3}} ?
<Splarka> you can use the template to categorize, and generate the image
<Splarka> say you had {{devstate|3}} for example
<Splarka> [[Category:Development state {{{1|0}}}]]
<Splarka> {{#switch:{{{1|none}}}
<Londenp> OK
<Duesentrieb> Splarka: but he wants the image not on the page that is categorized, but on a pages that links to the page that is categorized
<Splarka> |0=[[Image:Devstate_0.png|40px]]
<Splarka> |1=[[Image:Devstate_1.png|40px]]
<Splarka> etc
<Splarka> really? hmmmm
<Londenp> Duesentrieb exactly
<Splarka> DPL or javascript
<Duesentrieb> don't know if something like that exists. In any case, it would be tricky with the parser cache
<Splarka> javascript could do it in a quick query to the API
<Londenp> We have Dynamicpagelist installed
<Duesentrieb> disabling it is the obvious solution, that would suck for the front page though
<Splarka> (but on the front page that would be messy)
<Splarka> DPL should be able to do it then
<Londenp> not DPL alas
<Splarka> which wikibooks?
<Londenp> nl
<Londenp> Rereading the remark from Duesentrieb it is still a little different
<Splarka> useful how it doesn't say what version of dpl it has
* Splarka grumbles
<Londenp> it is the old old one
<Londenp> http://www.mediawiki.org/wiki/Extension:DynamicPageList/old This one
<Splarka> drat, don't think you can do what you want with it then
<Londenp> That is what I thought
<Londenp> It is very useful though, but the most modern version of DPL is much better
<Londenp> but I was not allowed to let that be installed on nl.wikibooks.org
<Londenp> might try again though on bugzilla
<Londenp> So my idea is this {{template|bookname}} for each book
<Londenp> this will show a link to the book and display an image for the development status of this book
<Londenp> this will automatically be updated if the status is changed
<Splarka> only way I can think of is the API, but on the mainpage that would be a lot of queries to it (and it isn't cached)
<Splarka> or a custom extension
<Londenp> The information about the status of the book is put in a hidden category with a DPL construct, and the template would look up which hidden category the book is in and then display the corresponding image
<Splarka> OR.. you could create subpages for each book, that indicate the status
<Splarka> such as {{Somebook/status 1}}
<Splarka> and use #ifexist
<Splarka> but that'd be... messy
<Londenp> That is a good idea
<Londenp> Splarka Thanks I will try that then
<Splarka> uhoh
<Splarka> don't you hate it when someone is threatening to jump off a bridge, and you say "go for it" sarcastically.. and then they do...
<Splarka> Londenp: you could also create one subpage, like [[Bookname/status]]
<Londenp> You might give me another choice :-=)
<Splarka> and have that /status contain the image that you want to transclude
<Splarka> and then always transclude [[{{{1}}}/status]] if it exists
<Splarka> that way you don't have to create and delete pages continually
<Londenp> yep
<Splarka> {{#ifexist:{{{1}}}/status|{{{{{1}}}/status}}|[[Image:Nostatus.gif]]}}
<Londenp> In fact we have something like an Infobox with the book that contains all information already
* Splarka nods
<Londenp> I just don't know how to extract that
<Splarka> heh
<Splarka> well, then...
<Splarka> what if /status only contained the number
<Splarka> 0 to 4
<Splarka> and on the infobox:
<Splarka> [[Category:Development status {{{{FULLPAGENAME}}/status}}]]
<Splarka> and on the main page:
<Splarka> {{#ifexist:{{{1}}}/status|[[Image:Status_{{{{{1}}}/status}}.gif]]|[[Image:Nostatus.gif]]}}
<Splarka> then you just update that /status page, and both the main page and the infobox are changed
<Londenp> OK, looks good
<Londenp> but I need a little time to understand this (not being educated)
* Splarka nods
<Londenp> Splarka thanks, I am going to study your proposals and work it out. When I have some more questions I will return.
<Splarka> thanks for the warning ^_^




## Paper quotes

• The paper burns, but the words fly away - Ben Joseph Akiba
• What the world really needs is more love and less paper work - Pearl Bailey
• Anything on paper is obsolete - Craig Bruce
• This paper will no doubt be found interesting by those who take an interest in it - John Dalton
• When the weight of the paper equals the weight of the airplane, only then you can go flying - Donald Douglas
• When you sell a man a book you don't sell him just 12 ounces of paper and ink and glue; you sell him a whole new life - Christopher Morley
• A specification that will not fit on one page of 8.5x11 inch paper cannot be understood - Mark Ardis
• A child's life is like a piece of paper on which every person leaves a mark - Chinese Proverb

1.7 13/03/2007 Ignore non-list lines when generating blacklist
1.6 15/01/2007 Support multiple-language translations
1.5 16/12/2006 Don't block all usernames when the blacklist contains blank lines
Use Unicode-friendly regular expressions
Don't show errors when the blacklist contains only comments
(above fixes from Brion Vibber)
1.4 19/06/2006 Fix fatal error
1.3 06/07/2006 Cache blacklist in memcached or similar, if available
1.2 25/04/2006 Performance rewrite
Allow users with the uboverride permission to pass the blacklist
1.1 08/03/2006 Make compatible with MediaWiki 1.5.8
Allow commenting out lines with #
1.0 09/01/2006 Initial version

## Bookshelf and bookcase

 [[{{{Vorighoofdstuk}}}|Previous chapter]] [[{{{Vorigepagina}}}|Previous page]] [[{{{Volgendepagina}}}|Next page]] [[{{{Volgend hoofdstuk}}}|Next chapter]] [[{{{Inhoudsopgave}}}|Table of contents]] [[{{{boek}}}|Papier]]

## math

${\displaystyle \operatorname {MSE} ({\hat {\theta }})=\operatorname {E} [({\hat {\theta }}-\theta )^{2}]}$

Normal multiplication turns this into:

${\displaystyle \operatorname {MSE} ({\hat {\theta }})=\operatorname {E} [{\hat {\theta }}^{2}-2{\hat {\theta }}{\theta }+\theta ^{2}]}$

Since theta is a constant, its expectation is exactly theta, so the expectation can be distributed over the terms:

${\displaystyle \operatorname {MSE} ({\hat {\theta }})=\operatorname {E} [{\hat {\theta }}^{2}]-2{\theta }\operatorname {E} [{\hat {\theta }}]+\theta ^{2}}$

Add and subtract the term E[theta-hat]^2 (their sum is 0, so nothing changes, but the extra terms help the next step):

${\displaystyle \operatorname {MSE} ({\hat {\theta }})=\operatorname {E} [{\hat {\theta }}^{2}]-\operatorname {E} [{\hat {\theta }}]^{2}+\operatorname {E} [{\hat {\theta }}]^{2}-2{\theta }\operatorname {E} [{\hat {\theta }}]+\theta ^{2}}$

Now the last three terms have the usual form a^2 - 2ab + b^2, which can be written as (a-b)^2 with a = E[theta-hat] and b = theta. You just work the other way around.

${\displaystyle \operatorname {MSE} ({\hat {\theta }})=\operatorname {E} [{\hat {\theta }}^{2}]-\operatorname {E} [{\hat {\theta }}]^{2}+(\operatorname {E} [{\hat {\theta }}]-\theta )^{2}}$

The first two terms are the variance of theta-hat and the last term is the squared bias. This can be written as:

${\displaystyle \operatorname {MSE} ({\hat {\theta }})=\operatorname {V} ({\hat {\theta }})+(bias({\hat {\theta }}))^{2}}$

Now when theta-hat is unbiased, the expectation of theta-hat equals theta, so the bias is zero.
And that means the MSE of theta-hat is exactly the variance of theta-hat.
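
The decomposition MSE(theta-hat) = V(theta-hat) + bias(theta-hat)^2 can be verified numerically on a toy discrete estimator; the target value and the probabilities below are made up purely for illustration:

```python
# Made-up target and discrete distribution of the estimator theta_hat.
theta = 2.0
values = [1.0, 2.0, 3.0, 4.0]   # possible values of theta_hat
probs = [0.1, 0.4, 0.3, 0.2]    # their probabilities (sum to 1)

e_hat = sum(p * v for p, v in zip(probs, values))                    # E[theta_hat]
var_hat = sum(p * (v - e_hat) ** 2 for p, v in zip(probs, values))   # V(theta_hat)
bias = e_hat - theta                                                 # bias(theta_hat)
mse = sum(p * (v - theta) ** 2 for p, v in zip(probs, values))       # E[(theta_hat - theta)^2]

# The direct MSE and variance-plus-squared-bias agree (both 1.2 here, up to floating point).
print(mse, var_hat + bias ** 2)
```

Any other distribution for `values`/`probs` gives the same agreement, since the identity holds exactly.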

If we take theta to be mu and theta-hat to be X-bar for an n-sample from a population X with mean μ and variance σ², the standard variance of the sample mean gives:

${\displaystyle \operatorname {MSE} =\left({\frac {\sigma }{\sqrt {n}}}\right)^{2}}$

An alternative calculation goes as follows. We start again with:

${\displaystyle \operatorname {MSE} ({\hat {\theta }})=\operatorname {E} [({\hat {\theta }}-\theta )^{2}]}$

Now we replace theta-hat with X-bar and theta with mu and you get:

${\displaystyle \operatorname {MSE} ({\bar {X}})=\operatorname {E} [({\bar {X}}-\mu )^{2}]}$

We will replace X-bar in this equation with the following:

${\displaystyle {\bar {X}}={\frac {1}{n}}\sum _{i=1}^{n}X_{i}={\frac {1}{n}}(X_{1}+\cdots +X_{n}).}$

So you get:

${\displaystyle \operatorname {MSE} ({\bar {X}})=\operatorname {E} [({\frac {1}{n}}(X_{1}+\cdots +X_{n})-\mu )^{2}]}$

This can be rewritten as:

${\displaystyle \operatorname {MSE} ({\bar {X}})=\operatorname {E} [({\frac {X_{1}+\cdots +X_{n}}{n}}-\mu )^{2}]}$

and

${\displaystyle \operatorname {MSE} ({\bar {X}})=\operatorname {E} [({\frac {X_{1}+\cdots +X_{n}}{n}}-{\frac {n}{n}}\mu )^{2}]}$

${\displaystyle \operatorname {MSE} ({\bar {X}})=\operatorname {E} [({\frac {X_{1}+\cdots +X_{n}-n\mu }{n}})^{2}]}$

Pulling the 1/n out of the expectation squares it to 1/n²:

${\displaystyle \operatorname {MSE} ({\bar {X}})={\frac {1}{n^{2}}}\operatorname {E} [(X_{1}+\cdots +X_{n}-n\mu )^{2}]}$

which turns into:

${\displaystyle \operatorname {MSE} ({\bar {X}})={\frac {1}{n^{2}}}\operatorname {E} [((X_{1}-\mu )+\cdots +(X_{n}-\mu ))^{2}]}$

Expanding the square produces cross terms of the form E[(Xi - mu)(Xj - mu)], which are zero for independent observations, so distributing the expectation leaves only the squared terms:

${\displaystyle \operatorname {MSE} ({\bar {X}})={\frac {1}{n^{2}}}\operatorname {E} [(X_{1}-\mu )^{2}]+\cdots +{\frac {1}{n^{2}}}\operatorname {E} [(X_{n}-\mu )^{2}]}$

By definition, the variance of each observation is

${\displaystyle \operatorname {E} [(X_{1}-\mu )^{2}]=\sigma _{1}^{2}}$

so you get:

${\displaystyle \operatorname {MSE} ({\bar {X}})={\frac {1}{n^{2}}}\sigma _{1}^{2}+\cdots +{\frac {1}{n^{2}}}\sigma _{n}^{2}={\frac {n\sigma ^{2}}{n^{2}}}={\frac {\sigma ^{2}}{n}}}$

This is the variance of the sampling distribution of the mean, matching the earlier result.
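
The result MSE(X-bar) = σ²/n can be spot-checked with a small Monte Carlo simulation; the population mean, standard deviation, and sample size below are made-up illustration values:

```python
import random

random.seed(0)
mu, sigma, n = 5.0, 2.0, 10   # illustrative population mean, sd, and sample size
trials = 20000

total_sq_error = 0.0
for _ in range(trials):
    # one sample of size n from N(mu, sigma^2), then its mean
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    total_sq_error += (xbar - mu) ** 2
mse_hat = total_sq_error / trials

# theory predicts MSE(X-bar) = sigma^2 / n = 4 / 10 = 0.4
print(mse_hat)
```

With 20,000 trials the estimate should land within a few hundredths of the theoretical 0.4.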

## regression

${\displaystyle {\hat {y}}=0.0027+0.7916{x}}$
${\displaystyle {\hat {y}}=49.48+1.96{x}}$
${\displaystyle y_{i}=\beta x_{i}+\varepsilon _{i}\,}$

The sum of squares to be minimized is

${\displaystyle S=\sum _{i=1}^{n}\left(Y_{i}-{\hat {\beta }}x_{i}\right)^{2}}$

so

${\displaystyle S=\sum _{i=1}^{n}\left(Y_{i}^{2}-2Y_{i}{\hat {\beta }}x_{i}+{\hat {\beta }}^{2}x_{i}^{2}\right)}$

To minimize, take the derivative with respect to beta-hat and set it to 0:

${\displaystyle {\frac {\partial }{\partial {\hat {\beta }}}}\sum _{i=1}^{n}\left(Y_{i}^{2}-2Y_{i}{\hat {\beta }}x_{i}+{\hat {\beta }}^{2}x_{i}^{2}\right)=\sum _{i=1}^{n}\left(-2Y_{i}x_{i}+2{\hat {\beta }}x_{i}^{2}\right)=0}$

Divide both sides by -2 and rearrange the summation to get:

${\displaystyle \sum _{i=1}^{n}{\hat {\beta }}x_{i}^{2}=\sum _{i=1}^{n}Y_{i}x_{i}}$

Take beta-hat out of the sum (it does not depend on i) and divide by the sum of x_i²:

${\displaystyle {\hat {\beta }}={\frac {\sum _{i=1}^{n}Y_{i}x_{i}}{\sum _{i=1}^{n}x_{i}^{2}}}}$

And therefore the least squares estimator for β is given by

${\displaystyle {\hat {\beta }}={\frac {\sum _{i}^{n}x_{i}Y_{i}}{\sum _{i}^{n}x_{i}^{2}}}}$
${\displaystyle {\hat {\beta }}={\frac {\sum _{i}^{n}x_{i}y_{i}}{\sum _{i}^{n}x_{i}^{2}}}}$
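
A quick numeric check that this closed form really minimizes S; the data points below are made up purely for illustration:

```python
# Made-up data for a regression through the origin.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.8, 2.1, 2.9, 4.2, 4.9]

# Closed-form least squares slope for the model y = beta * x.
beta_hat = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def S(b):
    """Sum of squared residuals for slope b."""
    return sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))

# Perturbing the slope in either direction can only increase S.
assert S(beta_hat) <= S(beta_hat + 1e-3)
assert S(beta_hat) <= S(beta_hat - 1e-3)
print(beta_hat)
```

Since S is a parabola in the slope, the closed-form value sits exactly at its vertex, so any perturbation increases the sum of squares.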

## Biasedness

${\displaystyle B({\widehat {\theta }})=\operatorname {E} ({\widehat {\theta }})-\theta }$
${\displaystyle B({\widehat {\theta }})=0}$

So check if

${\displaystyle \operatorname {E} ({\widehat {\theta }})=\theta }$

Let us use this on the estimator in a):

${\displaystyle {\widehat {\beta }}={\frac {\sum _{i}^{n}x_{i}Y_{i}}{\sum _{i}^{n}x_{i}^{2}}}}$

Now let us fill in the proposed population model into this equation:

${\displaystyle Y_{i}=\beta x_{i}+\epsilon _{i}}$

and you will get:

${\displaystyle {\widehat {\beta }}={\frac {\sum _{i}^{n}x_{i}(\beta x_{i}+\epsilon _{i})}{\sum _{i}^{n}x_{i}^{2}}}}$

Which is:

${\displaystyle {\widehat {\beta }}={\frac {\sum _{i}^{n}(\beta x_{i}^{2}+\epsilon _{i}x_{i})}{\sum _{i}^{n}x_{i}^{2}}}}$

And this can be written as:

${\displaystyle {\widehat {\beta }}={\frac {\sum _{i}^{n}\beta x_{i}^{2}}{\sum _{i}^{n}x_{i}^{2}}}+{\frac {\sum _{i}^{n}\epsilon _{i}x_{i}}{\sum _{i}^{n}x_{i}^{2}}}}$
${\displaystyle {\widehat {\beta }}=\beta +{\frac {\sum _{i}^{n}\epsilon _{i}x_{i}}{\sum _{i}^{n}x_{i}^{2}}}}$

Following the bias equation above, take the expectation of both sides (the x_i are fixed, so only the ε_i are random):

${\displaystyle \operatorname {E} ({\widehat {\beta }})=\beta +\operatorname {E} ({\frac {\sum _{i}^{n}\epsilon _{i}x_{i}}{\sum _{i}^{n}x_{i}^{2}}})}$

and as

${\displaystyle \operatorname {E} (\epsilon _{i})=0}$

you get:

${\displaystyle \operatorname {E} ({\widehat {\beta }})=\beta }$

Therefore the estimator is unbiased.
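
Unbiasedness can likewise be checked by simulation. A minimal sketch, assuming an illustrative β = 1.5, fixed x values, and standard normal errors (all numbers are made up):

```python
import random

random.seed(1)
beta_true = 1.5                      # illustrative true slope
x = [1.0, 2.0, 3.0, 4.0, 5.0]        # fixed regressors
sxx = sum(xi * xi for xi in x)       # sum of x_i^2

trials = 20000
total = 0.0
for _ in range(trials):
    # generate Y_i = beta * x_i + eps_i with E[eps_i] = 0
    y = [beta_true * xi + random.gauss(0.0, 1.0) for xi in x]
    total += sum(xi * yi for xi, yi in zip(x, y)) / sxx
mean_beta_hat = total / trials

# the average of beta_hat over many samples should be close to beta_true
print(mean_beta_hat)
```

The sample average of the estimates drifts toward β as the number of trials grows, which is exactly what E(β̂) = β claims.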

## Sigma

${\displaystyle y_{i}=\beta x_{i}+\varepsilon _{i}}$

rearrange:

${\displaystyle {\widehat {\varepsilon }}_{i}=y_{i}-{\widehat {\beta }}x_{i}}$

## SSR

${\displaystyle {SSR}=\sum _{i}^{n}({\hat {y}}_{i}-{\bar {y}})^{2}={\hat {\beta }}^{2}\sum _{i}^{n}(x_{i}-{\bar {x}})^{2}}$

${\displaystyle {\hat {\beta }}={\frac {\sum (x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{\sum (x_{i}-{\bar {x}})^{2}}}}$
${\displaystyle {\hat {\alpha }}={\bar {y}}-{\hat {\beta }}{\bar {x}}}$
${\displaystyle {\hat {\sigma }}^{2}={\frac {SSR}{n-2}}}$

${\displaystyle {t}={\frac {{\hat {\beta }}-\beta }{s_{\hat {\beta }}}}}$

## sigma

${\displaystyle {\hat {\epsilon }}_{i}=y_{i}-{\hat {\beta }}x_{i}}$
${\displaystyle {\hat {\epsilon }}_{i}=\left(\beta x_{i}+\epsilon _{i}\right)-{\hat {\beta }}x_{i}}$
${\displaystyle {\hat {\epsilon }}_{i}=\left(\beta -{\hat {\beta }}\right)x_{i}+\epsilon _{i}}$
${\displaystyle {\hat {\sigma }}^{2}={\frac {1}{n-1}}\sum {\hat {\epsilon }}_{i}^{2}}$

Take the variance of beta-hat, filling in the estimator from a):

${\displaystyle V\left({\widehat {\beta }}\right)=V\left({\frac {\sum _{i}^{n}x_{i}Y_{i}}{\sum _{i}^{n}x_{i}^{2}}}\right)}$

Now use the equation for Yi:

${\displaystyle Y_{i}=\beta x_{i}+\varepsilon _{i}}$

and fill that in the previous equation and you get:

${\displaystyle V\left({\widehat {\beta }}\right)=V\left({\frac {\sum _{i}^{n}x_{i}(\beta x_{i}+\varepsilon _{i})}{\sum _{i}^{n}x_{i}^{2}}}\right)}$

Two standard properties of the variance will be needed; the first holds when X and Y are independent:

${\displaystyle {Var}\left(X+Y\right)=Var(X)+Var(Y)}$

and

${\displaystyle {Var}\left(aX+b\right)=a^{2}Var(X)}$

Applying these properties:

${\displaystyle V\left({\widehat {\beta }}\right)=V\left({\frac {\sum _{i}^{n}(\beta x_{i}^{2}+\varepsilon _{i}x_{i})}{\sum _{i}^{n}x_{i}^{2}}}\right)}$
${\displaystyle V\left({\widehat {\beta }}\right)=V\left({\frac {\sum _{i}^{n}\beta x_{i}^{2}}{\sum _{i}^{n}x_{i}^{2}}}+{\frac {\sum _{i}^{n}\varepsilon _{i}x_{i}}{\sum _{i}^{n}x_{i}^{2}}}\right)}$
${\displaystyle V\left({\widehat {\beta }}\right)=V\left(\beta +{\frac {\sum _{i}^{n}\varepsilon _{i}x_{i}}{\sum _{i}^{n}x_{i}^{2}}}\right)}$
${\displaystyle V\left({\widehat {\beta }}\right)=V\left({\frac {\sum _{i}^{n}\varepsilon _{i}x_{i}}{\sum _{i}^{n}x_{i}^{2}}}\right)={\frac {1}{\left(\sum _{i}^{n}x_{i}^{2}\right)^{2}}}\sum _{i}^{n}x_{i}^{2}V\left(\varepsilon _{i}\right)}$

Using the definition given for Variance of epsilon:

${\displaystyle Var\left({\varepsilon _{i}}\right)=\sigma ^{2}}$
${\displaystyle V\left({\widehat {\beta }}\right)={\frac {\sigma ^{2}}{\sum _{i}^{n}x_{i}^{2}}}}$
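
The formula V(β̂) = σ²/Σx_i² can also be spot-checked by simulation; β, σ, and the x values below are illustrative assumptions:

```python
import random

random.seed(2)
beta_true, sigma = 1.5, 2.0          # illustrative slope and error sd
x = [1.0, 2.0, 3.0, 4.0, 5.0]
sxx = sum(xi * xi for xi in x)       # 55.0

trials = 20000
estimates = []
for _ in range(trials):
    # Y_i = beta * x_i + eps_i with Var(eps_i) = sigma^2
    y = [beta_true * xi + random.gauss(0.0, sigma) for xi in x]
    estimates.append(sum(xi * yi for xi, yi in zip(x, y)) / sxx)

mean_b = sum(estimates) / trials
var_b = sum((b - mean_b) ** 2 for b in estimates) / trials
theory = sigma ** 2 / sxx            # 4 / 55, about 0.0727

print(var_b, theory)
```

The empirical variance of the 20,000 estimates should agree with σ²/Σx_i² to within sampling noise.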

## Template

Afbeelding:Crystal Clear action player play.png

Information sourced from https://nl.wikibooks.org Wikibooks NL.
Wikibooks NL is part of the Wikimedia Foundation.