| prompt (string, 32 to 8.7k chars) | chosen (string, 39 to 7.87k chars) | rejected (string, 33 to 8.09k chars) | source (string, 15 classes) |
|---|---|---|---|
At first glance, this seems like such a simple question of "What's the highest point on Earth". However, I also know that the Earth isn't perfectly round. So that "highest point" may be in a relative valley.
Also, because it's non-spherical, the "center" may not be obvious either. So, I'm curious if there are different answers based on different definitions of "center" (such as geographic center versus center of mass).
So, what is the point on the Earth's surface farthest from the center of the Earth? Is this different based on different definitions of "center"?
|
It's Chimborazo, Ecuador, but only just, beating Huascarán, Peru, by less than 50 metres. Both are over 2 km 'higher' than Everest.
I made a plot of some mountains: height above centre of the earth vs absolute latitude. You can download the IPython Notebook source code here. Warning: v. hacky.
I can't find anything on the position of the centre of the earth. The formula I used for the latitude-dependent radius requires major (equatorial) and minor (polar) radii, but I don't have citations for them either. Argus in his article Defining the translational velocity of the reference frame of Earth gave some numbers for its temporal variance, but I have no idea how this might affect these mountain heights.
Last thing: Apparently, the floor of the Arctic Ocean is the closest point on the surface to the Earth's center (about 6353 km, 30 km 'below' Chimborazo), if you call the bottom of the sea the 'surface'.
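For reference, here is a minimal sketch in R of the latitude-dependent radius computation alluded to above. The WGS84 radii and Chimborazo's coordinates are my own inserted values (not from the original post), and adding the summit elevation radially to the ellipsoid radius is only a rough approximation (it ignores the geoid and the geodetic/geocentric latitude distinction):
# Geocentric radius of the WGS84 reference ellipsoid at latitude phi (degrees)
geocentric_radius <- function(lat_deg, a = 6378.137, b = 6356.752) {  # km
  phi <- lat_deg * pi / 180
  sqrt(((a^2 * cos(phi))^2 + (b^2 * sin(phi))^2) /
       ((a * cos(phi))^2 + (b * sin(phi))^2))
}
# Chimborazo: latitude about -1.47 degrees, summit about 6.27 km above sea level
geocentric_radius(-1.47) + 6.27   # roughly 6384 km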
|
Mount Chimborazo, which is 6,268 meters above sea level and within 1.5 degrees of the equator.
More specifically, according to Dr. Milbert, Chief Geodesist, NOAA, National Geodetic Survey, and Dr. Shum, Geodetic Science & Surveying, Ohio State Univ.:
distance from Earth's center of mass, with an uncertainty of only +/- 2 meters:
Mt. Chimborazo - 6384.459 kilometers
Mt. Huascaran - 6384.372 kilometers
Mt. Cotopaxi - 6384.062 kilometers
Mt. Kilimanjaro - 6383.955 kilometers
|
HuggingFaceH4/pmp-stack-exchangedata/earthscience.stackexchange.com
|
Suppose you have an alphabet with countably many letters. Every letter has a particular weight (for instance, as in the game of Scrabble). There are a total of $n^2$ letters that have weight $n$.
Given any word in this alphabet, let the weight of that word be the sum of the weights of its letters (again, as in Scrabble). It follows that there are roughly exponentially many words of weight $W$.
I am sampling words of weight $W$, uniformly at random. My somewhat vague question is: what does a "generic" word look like, for large $W$? This can be made precise in a few ways:
What is the expected value of the number of letters comprising a word of weight $W$? How many of these letters are expected to be distinct?
Does a generic word of weight $W$ have a letter that appears only once?
What is the expectation for the number of letters that appear only once in the word?
This is quite far from my field of expertise, so even simple pointers to references are much appreciated.
|
Here is a back of envelope computation. There will be no rigor whatsoever, just a cookbook approach that may be acceptable to a physicist but that every self-respecting mathematician should frown upon. It can give a plausible (but not guaranteed) answer to some of your questions if we just want the general order of magnitude without too high precision, but if you want more than that, we'll need to count honestly.
If what I write below looks like gibberish, it's OK. As I said, I just did the calculus, not the actual analysis. If your model corresponds to some physical reality and you know the values of some of the constants involved, it will be fun to compare them with what follows from my predictions. If you want more, like the distribution laws for the number of occurrences of individual letters (or, God forbid, the joint distribution for several letters), you need to consult a true expert, not an amateur like myself. In any case, have fun reading and feel free to ask questions. Also watch for idiotic mistakes in algebra: it is quite late here now... :-)
The main local generating function for counting words of weight $W$, assuming typical length about $L$ and decent length concentration, is the following (the point is to kill the factorials in the denominator; we will still be off by a factor of $e^L$, but we need a typical structure, not exact counts):
$F(z,w)=\exp(Lz(w+4w^2+9w^3+16w^4+\dots))=\exp\left(Lz\frac{w+w^2}{(1-w)^3}\right)$
Everything is nice and positive, so the mountain pass in the circle method is a sure bet. We need to find the controlling radius $r$ for the weight $W$, so, for $z=1$, we need to minimize
$L\frac{r+r^2}{(1-r)^3}-W\log r$.
Alas, we are certain to end up with $r$ noticeably less than $1$, so we have to differentiate honestly:
$$
r\frac{1+4r+r^2}{(1-r)^4}=\frac WL
$$
We also need the consistency equation, which says that we, indeed, get $L$ as a typical length with $z=1$, i.e., $\frac\partial{\partial z}(Lz\frac{r+r^2}{(1-r)^3}-L\log z)|_{z=1}=0$, i.e., $r+r^2=(1-r)^3$ so $r=0.284774761\dots$ and
$W/L=\sum_k k^3r^k=2.41614\dots$.
This is a bit counterintuitive because it predicts, in particular, that the typical
word consists mainly of the letters of low weight, and the option to diversify, which should lead to more possibilities, is really pretty useless. Let's run a sanity check. Suppose that we are looking at words using a few low-weight letters; the number of such words is about $A^W$ for some fixed $A>1$. Let us add a letter of weight $n$ in proportion $p$. We get the new count of about $A^{(1-pn)W}\left(\frac ep\right)^{pW}$, so to get the largest count we have to solve $Cn=-\log p$, whence the proportion of letters of weight $n$ decays exponentially in $n$. Thus the cost of diversification is prohibitively high here, and the answer we got makes sense.
Now, how many different letters typically? The generating function to consider now is
$$
G(s)=\prod_{k\ge 1}[1+s(e^{Lr^k}-1)]^{k^2}\,.
$$
The typical number $D$ of distinct letters would correspond to the zero derivative of $\log G(s)-D\log s$ with respect to $s$ at $1$. Thus, we expect something like
$$
\sum_k k^2(1-e^{-Lr^k})\approx\int_0^\infty t^2(1-e^{-Le^{-\rho t}})\,dt
$$
where $\rho=-\log r$. The second factor is just a sharp cutoff at $t=\frac{\log L}{\rho}\approx\frac{\log W}{\rho}$, so, let's say
$$
D\approx \frac 1{3\rho^{3}}\log^3 W= 0.168209\log^3 W.
$$
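A quick numerical sanity check of these constants (my own R sketch; it just re-derives $r$, $\rho$, $W/L$, and the $\log^3 W$ coefficient from the equations above):
# controlling radius: solve the consistency equation r + r^2 = (1 - r)^3
r <- uniroot(function(r) r + r^2 - (1 - r)^3, c(0.1, 0.5), tol = 1e-12)$root
r                                  # 0.2847747...
rho <- -log(r)
r * (1 + 4 * r + r^2) / (1 - r)^4  # W/L = sum_k k^3 r^k = 2.41614...
1 / (3 * rho^3)                    # coefficient of log^3 W in D: 0.16820...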
What's next? Ah, the typical number $U$ of letters appearing once! Now we need to place $s$ only on one term in the exponent, so we go to
$$
H(s)=\prod_{k\ge 1}[e^{Lr^k}-Lr^k(1-s)]^{k^2}\,.
$$
The same routine, setting the derivative of $\log H(s)-U\log s$ to $0$, gives
$$
U\approx\sum_k k^2 Lr^k e^{-Lr^k}\,.
$$
This one is harder because we run through the bump very quickly, i.e., in constant time. It also suggests an "oscillating asymptotics", i.e., that the arithmetic nature of $W$ introduces an effect that does not decay with size. However, we can still play our usual game around $\log L/\rho$ and get the quasi-asymptotics
$$
0.7927\log^2 W \le U\le 0.7996 \log^2 W\,.
$$
Alas, as I said, the quasi-asymptotics is, probably, all we can hope for here.
|
Sorry for the long silence. Believe it or not, it was impossible to find even 10 minutes until now, let alone the full hour I need to explain everything in a decent way. Even now I'm starting but I'm not at all sure I'll finish. I'll try to do it in as few shots as possible, but I apologize in advance if I need to bump this thread a few times.
Part 1. Generating functions and counters.
The key idea is that if $A_j$ are some sets of non-negative integers and $N_a$ is the number of ordered representations $a=a_1+a_2+\dots$ with $a_j\in A_j$, then
$$
\sum_{a\ge 0} N_az^a=\prod_j\left(\sum_{a_j\in A_j} z^{a_j}\right)\,.
$$
Unfortunately, this simplest form is not quite suitable for the weighted letter word counting. However, we can tweak this formula a bit.
Suppose we have an alphabet with letter weights $w_1,w_2,\dots$ and can use any letter any number of times. The number of words of weight $W$ we can create is
$$
\sum_{\sum_j \ell_jw_j=W}\frac{\left(\sum_j\ell_j\right)!}{\prod_j\ell_j!}
$$
(the standard multinomial formula for arrangements of letters with repetitions).
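As a tiny brute-force illustration (my own R sketch, not part of the original argument): for a toy alphabet the same count can be obtained by conditioning on the last letter, $N(k)=\sum_j N(k-w_j)$, and it agrees with summing the multinomials above over all solutions of $\sum_j \ell_j w_j = W$:
w <- c(1, 1, 2)                 # toy letter weights
W <- 10
N <- numeric(W + 1); N[1] <- 1  # N[k + 1] holds the number of words of weight k
for (k in 1:W)
  N[k + 1] <- sum(N[k + 1 - w[w <= k]])
N[W + 1]                        # 5741 words of weight 10 for these weights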
Thus, taking into account that $\sum_{\ell\ge 0}\frac{Z^\ell}{\ell!}=e^Z$, we get almost what we want for the coefficient at $z^W$ if we consider the product
$$
\prod_j \exp(z^{w_j})
$$
The only problem is that each word is counted not with weight $1$, as it should be in uniform sampling, but with a weight that is the inverse factorial of its length, which skews the uniform distribution quite a bit. The way to compensate for that is to guess the typical length $L$ and to change the function to
$$
\prod_j \exp(Lz^{w_j})
$$
Now, the coefficient is multiplied by $L^{\sum_j\ell_j}$, which is approximately proportional to $\left(\sum_j\ell_j\right)!$ as long as $\sum_j\ell_j\approx L$. As a matter of fact, this weighting emphasizes words of length $L$ a bit more strongly than it should, because $\frac{L^\ell}{\ell!}$ is maximized at $\ell\approx L$. However, as long as $\ell-L=o(\sqrt L)$, the skewing it introduces is negligible. This is what I called the local counting function: the weights are suppressed way too much outside a small window, but we hope that outside that window we have only a small portion of words anyway, so suppressing them even further changes nothing in the picture.
If we have a clear idea of what the typical length is, we can introduce all other kinds of counters. To count the number of distinct letters, we need a variable that appears exactly once if the letter is used at all, no matter how many times the letter is used. The factor $1+s(e^{Lz^{w_j}}-1)$ does exactly that if you look at the "typical" power of $s$ in the expansion (recall that $\frac{(Lz^{w_j})^{\ell}}{\ell!}$ corresponds to using the $j$-th letter $\ell$ times, so $s$ should appear once if $\ell>0$ and not appear if $\ell=0$). The unique-letter counter should appear only if $\ell=1$, so $e^{Lz^{w_j}}-(1-s)Lz^{w_j}$ does the job, adding the $s$-factor to the "linear term" in the expansion of the exponent but nowhere else. You can now play a bit, setting various counters yourself, to see what generating functions to consider in various cases.
Part 2. The central term extraction.
Suppose now that we have a function $F(s,r)=\sum_{\ell,w}N(\ell,w)s^\ell r^w$ of two variables (you can trivially generalize this to more than two variables as well but I do not want to do the one-variable case because some games you can play with several variables would be invisible there). Suppose that we want to estimate $N(L,W)$. The obvious upper bound (the whole is larger than its part) is
$$
N(L,W)\le s^{-L}r^{-W}F(s,r)
$$
where we are free to choose $s$ and $r$. Of course, we are going to choose them so that the right hand side is as small as possible. This leads to the minimization problem
$$
\log F(s,r)-L\log s-W\log r\to\min\,.
$$
Suppose that $(1,r)$ is a stationary point of the objective function, i.e., the differential vanishes there. Suppose also that, after switching to the variables $\log s,\log r$ (which makes the subtracted terms linear), the second differential is bounded by
$A^2(ds)^2+B^2(dr)^2$ near this stationary point. Then we can easily control the sum of all terms that correspond to pairs $\ell,w$ that differ a lot from the pair $(L,W)$ in $F(s,r)$ by looking at $F(se^\sigma,re^\rho)$. We still have
$$
N(\ell,w)s^\ell r^w\le e^{-\sigma\ell-\rho w}F(se^\sigma,re^\rho)=
e^{-\sigma(\ell-L)-\rho (w-W)} e^{-\sigma L-\rho W}F(se^\sigma,re^\rho)\le
e^{-\sigma(\ell-L)-\rho (w-W)}e^{A^2\sigma^2+B^2\rho^2}F(s,r)
$$
due to the stationarity and the second derivative estimate.
Now, choosing $|\sigma|=A^{-1},|\rho|=B^{-1}$, we see that
$$
N(\ell,w)s^\ell r^w\le e^2\exp(-A^{-1}|\ell-L|-B^{-1}|w-W|)F(s,r)\,,
$$
so the terms with $|\ell-L|>A'$ or $|w-W|>B'$ can contribute at most about $AB[e^{-A'/A}+e^{-B'/B}]$ times $F(s,r)$. This crude bound is often enough to show that the main contribution comes from the terms with $w\approx W$, $\ell\approx L$, which sort of allows us to say that the pair $(L,W)$ is typical in the weighted counting where the pair $(\ell,w)$ has the weight $N(\ell,w)s^\ell r^w$. The key idea is that if $s=1$, then this counting is uniform in $\ell$ for each fixed $w$, so getting a typical pair in such weighted counting is essentially the same as getting a typical $L$ for fixed $W$.
Part 3. Circle method and mountain pass.
To be continued . . .
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
Summary : Am I correlating two independent variables? Is that the problem?
Let's say I have data points for the square footage of a house and the asking price. Now, I can ask "Does square footage (x) determine price (y)?" This is intuitive, and makes sense. R-squared would end up saying "Square footage explains X% of the variation in price". So far, so good.
But what if I want to predict the square footage from the price? That seems valid. Now, I ask "Does price (x) determine square footage (y)?" So far, it seems either can function as the independent or dependent variable. However, the wording of r-squared seems off. "Price explains X% of the variation in square footage". Huh? Square footage is not some sort of multi-factor variable. It's more static. Nothing "explains" the square footage, it just is. Get what I'm saying? Like if price only explains X% of square footage, what else would explain square footage? Square footage is just square footage. It's not like price, which can be determined by many things (square footage, renovations, size of yard, etc.).
Another example can be age (x) and the mileage on a car (y). With a regression equation, I can use one to predict the other. Either order seems to work. However does age "explain" the mileage, or does mileage "explain" the age? In this case, both seem weird. Both are just static independent variables. Neither explains the other, if you ask me.
What am I missing here? Thanks!
|
Your wording is implying causality, which is not what the R^2 represents. "Does price (x) determine square footage (y)?" implies causality, which is not what is captured through a correlation. "Price explains X% of the variation in square footage" describes that there is a relationship between price and square footage, but not a causal one. It only implies that these variables vary together, not that price causes square footage. It's more akin to saying "In general, when price goes up X amount, square footage happens to go up Y amount".
|
Your example can legitimately be run the other way. Why not estimate square footage from price? Suppose price data is publicly available, but square footage is not. Yet you want to estimate square footage (to determine the carpet or furniture market, the likely heating cost, or whatever). It's perfectly valid to model square footage as a function of price.
In my opinion, you are getting hung up on the semantics of "independent" and "dependent" variables. Better to use "predictor" and "predicted".
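A quick R illustration of this symmetry (my sketch, with made-up data): in simple linear regression, R-squared equals the squared correlation, so it is identical whichever variable plays the role of the predictor:
set.seed(42)
sqft  <- runif(50, 500, 3000)               # hypothetical square footages
price <- 50 * sqft + rnorm(50, sd = 20000)  # hypothetical asking prices
summary(lm(price ~ sqft))$r.squared         # price "explained" by square footage
summary(lm(sqft ~ price))$r.squared         # square footage "explained" by price: same
cor(sqft, price)^2                          # both equal the squared correlation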
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
There's a natural map $f:\overline{\mathcal{M}}_{1,1}\to \overline{M}_{1,1}\cong \mathbb{P}^1$ from the stack of elliptic curves to the coarse space. Both spaces have $Pic=\mathbb{Z}$, hence $f^*:\mathbb{Z}\to\mathbb{Z}$ is a homomorphism. Which homomorphism? My guess is $x \mapsto 24x$, since the Picard group of the stack is generated by the Hodge class, which has degree $1/24$. Do you agree?
|
I believe the number is 12.
I will assume the characteristic of the base field is not 2 or 3 so that I can use $\overline{\mathcal{M}}_{1,1} \simeq \mathbf{P}(4,6)$. Recall that $\mathbf{P}(4,6)$ is constructed by dividing $V = \mathbf{A}^2 \smallsetminus \{ (0,0) \}$ by the weight $(4,6)$-action of $\mathbf{G}_m$.
The line bundle $\mathcal{O}(1)$ (which coincides with the Hodge bundle and generates the Picard group) can be constructed as the equivariant line bundle $V \times \mathbf{A}^1$ on $V$ with $\mathbf{G}_m$ acting by weights $(4,6,1)$. Then $\mathcal{O}(4)$ has a canonical section $g_4$ and $\mathcal{O}(6)$ has the section $g_6$. In $\mathcal{O}(12)$ we have the two sections
$1728 g_4^3$
$\Delta = g_4^3 - 27 g_6^2$.
This linear series defines $j : \mathbf{P}(4,6) \rightarrow \mathbf{P}^1$.
Note that $j$ has degree $1/2$ because of the generic automorphism on $\mathbf{P}(4,6)$. That is, $j_\ast j^\ast$ is multiplication by $1/2$. Thus the Hodge class $\lambda$ satisfies
$$\int_{\overline{\mathcal{M}}_{1,1}} \lambda = \int_{\mathbf{P}(4,6)} c_1(\mathcal{O}(1)) = \frac{1}{12} \int_{\mathbf{P}(4,6)} c_1(j^\ast \mathcal{O}_{\mathbf{P}^1}(1)) = \frac{1}{12} \int_{\mathbf{P}^1} j_\ast j^\ast c_1(\mathcal{O}_{\mathbf{P}^1}(1)) = \frac{1}{12}\cdot\frac{1}{2} = \frac{1}{24}.$$
Another thing that may be confusing here is that $j$ is generically unramified, so that a local equation for a point in $\mathbf{P}^1$ pulls back under $j$ to a local equation in $\overline{\mathcal{M}}_{1,1}$. Thus $j^\ast \Delta = \delta$ (if $\Delta$ denotes the boundary in the coarse moduli space and $\delta$ the boundary in the stack).
|
Let us choose $\Delta_{irr}$ the point representing the class of a nodal curve as generator of $Pic(\overline{M}_{1,1})\cong\mathbb{Z}$. Let $\delta_{irr}$ be the corresponding boundary divisor in $\overline{\mathcal{M}}_{1,1}$, and let $f:\overline{\mathcal{M}}_{1,1}\rightarrow\overline{M}_{1,1}$ be the canonical morphism between the stack and its coarse moduli space.
Since a nodal curve $[C]\in\Delta_{irr}$ of arithmetic genus $1$ has two automorphisms (the identity and the elliptic involution), by Proposition 3.92 of Harris-Morrison's "Moduli of Curves" we have:
$$\delta_{irr}=\frac{1}{Aut(C)}f^{*}\Delta_{irr} = \frac{1}{2}f^{*}\Delta_{irr}.$$
Now, by Theorem 6.9 of Hain's notes:
http://arxiv.org/pdf/0812.1803v2.pdf
the Hodge class $\lambda$ generates $Pic(\overline{\mathcal{M}}_{1,1})$ and furthermore we have
$$\mathcal{O}_{\overline{\mathcal{M}}_{1,1}}(\delta_{irr}) = 12\lambda\in Pic(\overline{\mathcal{M}}_{1,1}).$$
Finally
$$f^{*}\Delta_{irr} = 2\delta_{irr} = 2\cdot12\lambda = 24\lambda$$
and the homomorphism
$$f^{*}:Pic(\overline{M}_{1,1}) = \left\langle\Delta_{irr}\right\rangle\rightarrow Pic(\overline{\mathcal{M}}_{1,1}) = \left\langle\lambda\right\rangle$$
is given by $n\mapsto 24n$ as you predicted.
I guess that the underlying fact is that $\overline{\mathcal{M}}_{1,1}\cong\mathbb{P}(4,6)$. In order to pass to the coarse moduli space $\overline{M}_{1,1}\cong\mathbb{P}^{1}$ you have to take into account the two points of $\mathbb{P}(4,6)$ with stabilizers $\mathbb{Z}_{4}$ and $\mathbb{Z}_{6}$ ($lcm(4,6) = 12$) but also the fact that $\mathbb{P}(4,6)$ is not well-formed meaning that $4,6$ are both divided by $2$. So the general point of $\mathbb{P}(4,6)$ has stabilizer $\mathbb{Z}_{2}$ which corresponds to the elliptic involution of the general elliptic curve.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
Suppose a rigid body has mirror symmetry along the $z$ -Axis, i.e. $\rho(x,y,z)=\rho(-x,-y,z)$ where $\rho$ is the density of the body.
How can I show from this that the center of mass lies on the $z$ -Axis and that those non-diagonal entries of the inertia tensor corresponding to $z$ vanish?
Both statements are very intuitive, but I would like to prove it formally.
I thought that maybe cylindrical coordinates would help, but those don't get me anywhere either.
Any hint or advice is very much appreciated!
|
This is a volume obtained by revolving $z= f(\sqrt{x^2+y^2}) = f(r)$ around the z-axis. (Strictly, the given symmetry $\rho(x,y,z)=\rho(-x,-y,z)$ is only a half-turn about the z-axis, but the same cancellation goes through by pairing $\theta$ with $\theta+\pi$.) The x-coordinate of the center of mass is given by
$$ \bar x =\dfrac{\iiint \rho\, r\sin \theta\; r\, d\theta \,dr\, dz}{\iiint \rho\; r\, d\theta \,dr\, dz} $$
The numerator can be expressed as
$$ \int_0^{2 \pi} \sin \theta \,d\theta \cdot \iint \rho \,r^2 \,dr\, dz $$
which vanishes.
Similarly, the $\bar y$ coordinate of the center of mass is zero, while $\bar z$ is in general non-zero, so the CM lies on the z-axis.
|
Try doing this with summation over discrete points first to see how the logic works. Most texts do this. Consider point masses, each of mass M, located symmetrically in the x-y plane about the z-axis. For example (the simplest example), M at (x, 0, 0) and M at (-x, 0, 0). Clearly the equations for the CoM and the product of inertia in the x-y plane will cancel:
$$2M\,x_{\rm cm} = Mx + M(-x) = 0,$$
$$M(xy) + M(-x)y = 0.$$
Now generalize this to integrals. As the comment suggests, divide up your integrals over x and y into two parts, -infinity to 0 and 0 to +infinity. Then you need to do some manipulation of the integrand to make the integrals along the negative axis match those along the positive axis. You will get the same expression with a relative minus sign.
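Here is a small numeric version of that argument (my own R sketch): build a body out of random point masses together with their images under $(x,y,z)\mapsto(-x,-y,z)$, and check that the x and y of the centre of mass and the z-related products of inertia vanish:
set.seed(1)
m <- runif(200)                    # random masses
p <- matrix(rnorm(600), ncol = 3)  # random positions (x, y, z)
q <- p; q[, 1:2] <- -q[, 1:2]      # mirror images under (x,y,z) -> (-x,-y,z)
M <- c(m, m); P <- rbind(p, q)     # the symmetric body
colSums(M * P[, 1:2]) / sum(M)     # x and y of the centre of mass: both 0
-sum(M * P[, 1] * P[, 3])          # I_xz = -sum m*x*z : 0
-sum(M * P[, 2] * P[, 3])          # I_yz = -sum m*y*z : 0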
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I was performing an experiment in which five resistances were connected in series, and then I connected a bulb in parallel with each resistance. I observed that the bulb across the highest resistance lit brightest.
All the bulbs had the same resistance of 10 ohms, and since it was a series circuit, the same current definitely passed through every point of the main circuit.
But the current through the largest (highest) resistance was quite 'low', while at the same time the current through the bulb connected across that highest resistance was 'high'. So I thought it might be because that resistance was high and the bulb's resistance was low, so the current found the easiest path and just passed through there.
But there must be a reason according to some formula that proves this phenomenon.
|
First, when you connect a bulb in parallel with each resistor, you no longer have a series circuit, so it isn't true that the current through each circuit element is identical.
Second, the equivalent resistance of each parallel resistor-bulb combination is
$$R_{i,eq} = 10\,\Omega || R_i = \frac{10\,\Omega \cdot R_i}{10\,\Omega + R_i},\quad 1\le i \le 5$$
Thus, $R_{i,eq} < 10\,\Omega$, and the larger the value of $R_i$, the closer $R_{i,eq}$ is to $10\,\Omega$.
Now, by voltage division, the largest voltage is across the largest equivalent resistance and, of course, the bulb with the largest voltage across it will glow brightest.
Then, by current division, the larger the value of $R_i$, the larger the proportion of the current that goes through the bulb.
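To put numbers on this, here is a small R sketch (my own, with made-up resistor values and supply voltage) applying exactly the parallel-combination, voltage-division, and current-division steps above:
Rb  <- 10                    # bulb resistance (ohms)
R   <- c(1, 5, 10, 50, 100)  # hypothetical series resistors (ohms)
Req <- Rb * R / (Rb + R)     # parallel resistor-bulb combinations
V   <- 12                    # hypothetical supply voltage
I   <- V / sum(Req)          # the same series current flows through every stage
Vi  <- I * Req               # voltage division across the stages
cbind(R, Req, Vi, Ib = Vi / Rb, Pb = Vi^2 / Rb)  # bulb current and power rise with R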
|
There is a simple and elegant way to understand why this happens.
When two resistances are connected in parallel, say $R_1$ and $R_2$ , their effective resistance is given by $R= \frac{R_1 R_2}{R_1+R_2}$ . Here, $R$ will be smaller than the smallest of the two resistances. $R$ will approach the smaller of the two resistances as the other resistance increases. You can check this by calculating the limit when one of the resistances tends to $\infty$ , which I leave up to you.
Coming back to your question, the resistance of the combination of the highest resistance and the bulb will be the highest in the circuit. Hence, the potential drop across it will be the highest because, from Ohm's law $V=IR$ and since $I$ is same for all the resistance-bulb combinations, $V \propto R$ . Now, the power dissipated will be given by $\frac{V^2}{R_b}$ where $R_b$ is the resistance of the bulb. Since the potential drop across it is the highest, power dissipated by it will be highest and hence, it will be brightest!!
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
My experiment involves looking at the percentage of cells on a slide that have a particular property under condition A and condition B. I have one slide for each condition. I have done this experiment 3 times, and let's assume the data look like this:
Cond.A Cond.B
Exp. 1 32% 40%
Exp. 2 31% 41%
Exp. 3 35% 44%
Can I now do a T-test between conditions A and B? Or should I just pool all the fractions per condition and do categorical data statistics?
I assumed I can do a t-test because at least within the realm of this data, it seems continuous and uncensored. Or am I wrong?
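For reference, the paired version of that t-test is a one-liner in R (my sketch; whether its assumptions are reasonable with only three slide pairs is exactly what's at issue):
a <- c(32, 31, 35)            # condition A, one slide per experiment (%)
b <- c(40, 41, 44)            # condition B
t.test(b, a, paired = TRUE)   # pairs by experiment, tests mean difference = 0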
|
The most obvious problem I see is that there is quite a large number of 0s in your data. I would start with investigating those. Are these real 0s or did something go wrong during the measurement? How would you want to model those 0s? Are there special reasons that make it realistic that they happen, but do these reasons need to be part of your model? etc.
|
As Maarten rightly pointed out, there are a number of 0's in the data. These were due to sampling from the centromeric regions, and hence there is no measurement there.
When I filter these regions out I then find a much better plot of the data.
Thanks for your help :)
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
Imagine we have an ultra-high-intensity, ultra-low-frequency laser, with wattage on the order of terawatts and a wavelength on the order of a light-second. We rotate it so that the electric field component is oriented along the $\hat z$ axis, then fire it at a macroscopic block with a positive electric charge. Because of the low frequency, the block will experience an electric field that doesn't immediately change direction, and because of the high intensity, the field will be very strong. So from this naive understanding of classical physics, the block will briefly levitate.
Except this blatantly contradicts both QM (Compton scattering) and multiple macroscopic experiments (like solar sails), which both say that the block will be pushed in the direction of the laser. What assumptions in the original problem are missing/wrong?
|
There will indeed be a small motion transverse to the beam. The motion will also result in a magnetic force along the beam. Now, imagine half a wave cycle later, the electric (and magnetic) field has reversed direction, the transverse motion is reversed, however the magnetic force is not reversed, it is still directed along the beam, because both the (transverse) velocity and the magnetic field have flipped sign. Over many cycles, the transverse motions cancel each other out, while the longitudinal motion does not.
|
The block doesn't levitate because the total force due to the electric field is zero, as the block has equal amounts of positive and negative charge. According to classical physics, if the block is non-conducting it won't even move in the direction of the laser. But if it is conducting, the electric field will give rise to a current on the face on which the laser is incident. As electrons start moving in the direction opposite to the electric field, they experience a magnetic force F = -e(v x B) which is in the direction of propagation of the laser, as you can verify using the right-hand rule, and due to this force things move in the direction of the incident light.
For the same reason, when charges start moving in the transverse direction due to the electric field of the light, they experience a magnetic force due to their velocity, which is obviously not transverse to the light's direction.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I seem to remember a proof that if a category $C$ has coproducts and a $0$ object, then necessarily if we had objects of $C$, say $a$ and $-a$, such that $a \oplus -a \simeq 0$, then $a\simeq 0\simeq -a$.
But right now, I can't place this, nor am I 100% sure that that is the correct property. I am able to show, using the various universal properties at play, that the morphisms in such a category are necessarily quite boring, but not that it completely collapses (i.e. if I assume that all objects have a corresponding 'negative' object).
|
With Yoneda? For every object $X$, $Mor(a\oplus (-a),X)$ is a singleton since $a \oplus (-a) \cong 0$ is initial. And $Mor(a\oplus (-a),X) \cong Mor(a,X) \times Mor((-a),X)$ is a singleton as well. So $Mor(a,X) \cong Mor((-a),X) \cong Mor(0,X)$. Hence $a \cong (-a) \cong 0$.
|
For any object $X\in C$ we have the bijections $C(a, X)\times C(-a, X) \cong C(a\oplus -a, X)\cong C(0, X)=\{0\}$, so each of the sets $C(a, X), C(-a, X)$ has one element (the zero morphisms $a\to X = a\to 0\to X$ and $-a\to X = -a\to 0\to X$). Then $a$ is an initial object, as is $0$, so $a\cong 0$; similarly $-a\cong 0$.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
Periods of operation at longitude--current as well as historical. The historical list on Wikipedia is incomplete. Is there a single site with status of operational satellites (real-time) or do you have to go to every operator you can think of?
|
The WMO OSCAR database is a list of all Earth observation satellites¹. The resulting table can be sorted by orbit type, status (inactive/operation/planned), agency, and other aspects. From their own description:
This table shows all known past, current and future satellites for meteorological and earth observation purposes. It can be sorted by clicking on the column headers. The filter on the right allows to display only specific satellites.
A screenshot (not reproduced here) showed three GOES satellites, with the filter scroll-bar on the right.
Note that OSCAR has a lot more capabilities than this. For anything Earth-observation-from-space related, it is a superb resource.
¹Except classified (spy) satellites...
|
I queried Wikidata for "instance of" = "weather satellite" and came up with this list:
http://tools.wmflabs.org/wikidata-todo/autolist.html?q=CLAIM[31%3A209363%2C0%3A%28%29]
Please remember that Wikidata still needs a huge amount of work, before queries can be judged accurate and complete (The project just started 2 years ago). If you need help improving the listing I can help you out, because I am heavily involved in Wikidata (Or coordinate your effort with the Space WikiProject: https://www.wikidata.org/wiki/Wikidata:WikiProject_Space ). More complex queries will be possible in the future when data density increases (the query is missing the "operational status" you would like to query).
|
HuggingFaceH4/pmp-stack-exchangedata/earthscience.stackexchange.com
|
It's well known that if E is a vector bundle with Chern roots $a_1,\ldots, a_r$,
then the Chern roots of the $p$th exterior power of E consist of all sums of $p$ distinct $a_i$'s. I would like to say the same is true if E is just a torsion-free coherent sheaf on $P^n$. It seems non-obvious, though, maybe because an exterior power isn't generally an additive functor.
Presumably this is either false or also well known, but I can't find a reference.
|
My guess would be that the formula you want does not extend to the case of coherent sheaves. As indicated in Mariano's and David's answers (one of which has unfortunately been deleted), the best hope for computing is via a resolution $\mathcal F$ of $E$ by vector bundles. In general, for two perfect complexes $\mathcal F, \mathcal G$ of vector bundles, there is a formula for the localized Chern classes
$$ch_{Y\cap Z}(\mathcal F \otimes \mathcal G) = ch_Y(\mathcal F)ch_Z(\mathcal G)$$, with $Y,Z$ being the respective supports. Unfortunately, this only gives the right formula for the "derived tensor product".
So to mess up the formula, one can pick $E$ such that $Tor^i(E,E)$ are non-trivial. I think an ideal sheaf of codimension at least 2 would be your best bet for computation purpose.
|
This is really a reference for the problem in Graham's comment, rather than an answer to the question. See
Tchernev, Alexandre B. Acyclicity of symmetric and exterior powers of complexes. J. Algebra 184 (1996), no. 3, 1113--1135. MR1407888
Weyman, Jerzy. Resolutions of the exterior and symmetric powers of a module. J. Algebra 58 (1979), no. 2, 333--341. MR0540642
where the symmetric and exterior powers of finite free resolutions of modules are considered.
Under some conditions on the determinantal ideals determined by the maps in the resolutions, the complexes obtained are resolutions of the symmetric and exterior powers of the modules.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
My textbook says that in an isolated system (when there is no external force and the internal forces are conservative) the mechanical energy of the system remains constant.
It then states the example of a freely falling ball, where the sum of the potential and kinetic energy of the ball is always constant.
But if we consider the ball as the system, then we have an external force (gravity) acting on the system, then why is the mechanical energy constant in this case?
|
The ball alone does not possess gravitational potential energy (GPE). GPE is a property of the ball-earth system. Therefore mechanical energy is conserved for the ball-earth system, not the ball alone.
So if I take the ball as the system, then the mechanical energy is not
conserved, right?
Correct. The ball increases kinetic energy, but nowhere in the system (the ball alone) is there a corresponding decrease in potential energy (of any kind). Or, to put it another way, the ball acquires kinetic energy because it is not an isolated system, the gravitational force now being considered "outside" the system.
Hope this helps.
|
You need to understand this clearly in the early stages. It is a very simple situation: if the ball is in free fall, it already had gravitational potential energy, so even before it started to fall it had gained that potential energy. Besides, it does not make much sense to take just one object and call it a system; in this case the ball and the earth form the system. When you say "free fall" you have already included the earth inside your system.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
My data mydata consist of columns x1, x2, ..., x100, y in R. But I am thinking of a linear model with second-order terms, such as y ~ x1^2 + x2^2 + x1*x2 + ... How do I achieve that within a formula, or in any other way, in R?
When I tried the above, my pls model ignored all second-order terms. Do I have to manually create those columns?
|
The formula documentation for R shows how to do this. In short, you use poly(). For example, make some quadratic data:
x <- rnorm(100)
y <- x + x**2 * 0.5 + rnorm(100)
Now fit this using a second order polynomial (i.e x and x**2 ) like this
mod <- lm(y ~ poly(x, 2))
Note that this will fit an orthogonal polynomial, so it won't recover 1 and 0.5 as the coefficients in the generating distribution. If for some reason you want that, use poly(x, 2, raw=TRUE). In general you don't, for stability reasons, so stick with the cooked version.
There is also polym, as in lm(y ~ polym(x, z, degree=2)), for a model with a full set of crossed variables, which is a bit more trouble to interpret, but that's presumably not important with hundreds of variables.
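A minimal check of the raw-versus-orthogonal point (runnable as-is):
set.seed(1)
x <- rnorm(100)
y <- x + 0.5 * x^2 + rnorm(100)
coef(lm(y ~ poly(x, 2)))              # orthogonal basis: coefficients are not 1 and 0.5
coef(lm(y ~ poly(x, 2, raw = TRUE)))  # raw basis: recovers roughly 1 and 0.5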
|
Here's how to do it in principle, illustrated on a smaller dataset with only 10 predictors:
# Make fake data
mydata = as.data.frame(matrix(rnorm(1100), 100))
names(mydata) = c(paste0("x", 1:10), "y")
# Form a matrix containing all predictor columns but not y
x = as.matrix(mydata[, 1:10])
# Use poly() to form all 2-way interactions and 2nd order terms
x2 = poly(x, degree = 2, raw = TRUE)
# Resave as a data frame including y
mydata2 = as.data.frame(cbind(x2, y = mydata$y))
# Fit the complete linear model
lm2 = lm(y ~ ., data = mydata2)
However, you have 100 predictors. In my experience, with more than 10-15 predictors, R usually cannot allocate enough memory for the matrix containing every 2-way interaction. You will get unhelpful errors or R will simply crash.
If so, consider whether you really need all 2-way interactions. Maybe just a subset would make sense. For instance, you could use poly() as above to form all 2-way interactions within one subset of x's, then again to form interactions between another subset of x's, but not have any interactions across those subsets.
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
I'm trying to fully understand the confidence interval formula given on this site:
$$\hat{\mu}\pm z_{1-\alpha/2}\sqrt{\frac{\hat{\mu}(1-\hat{\mu})}{n}}$$
so I can reproduce the same type of intervals for my own data. But I don't quite understand what the parameters such as $\alpha$ and $Z$ mean. I'm guessing they're related to defining a 95% confidence interval if your data were distributed normally.
Can someone explain to me how that formula works, or give a reference where I can find a description of the formula used?
|
If $\hat{\mu}$ is the mean error rate computed by averaging the error rates from $N$ different tests, an explanation could be:
Let $X$ be the number of errors on $N$ tests, so $X$ is a binomially distributed random variable with mean $N\hat{\mu}$ and variance $N\hat{\mu}(1-\hat{\mu})$ (it is a sum of $N$ Bernoulli random variables).
Thus $X/N$ has mean $\hat{\mu}$ and variance $\frac{\hat{\mu}(1-\hat{\mu})}{N}$.
By the central limit theorem, it can be approximated by a normal random variable with the same mean and variance. Then you can compute the $1-\alpha$ confidence interval with:
$$P\bigg(-z_{1-\alpha/2}\leq\frac{\mu-\hat{\mu}}{\sqrt{\hat{\mu}(1-\hat{\mu})/N}}\leq z_{1-\alpha/2}\bigg) = 1 - \alpha$$
Bibliography:
It is similar to estimating a confidence interval for accuracy using a test set of $N$ values in a classification problem. You should take a look at P.N. Tan, M. Steinbach, V. Kumar, Introduction to Data Mining, Addison Wesley, 2006.
|
Answering the following part of your question:
I don't quite understand what the parameters such as alpha and Z mean
$\alpha$ is the parameter that defines the confidence level of the interval. Specifically, the confidence level will be $100(1-\alpha)$%, so to get a 95% confidence interval, set $\alpha=0.05$.
$Z$ is a reference to the normal distribution, and in this case $z_q$ means its $q$-th quantile, that is the value for which $P(Z < z_q) = q$, where $Z$ is the standard normal distribution. This can be looked up in tables or calculated by computers. For example, when $\alpha=0.05$, the formula needs the 0.975-th quantile, that is the value which exceeds 97.5% of the normal distribution. Its value is $z_{0.975}=1.96$.
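Putting the two answers together, the interval is one line of R (my sketch, with a made-up error rate and sample size):
n     <- 200                  # hypothetical number of tests
muhat <- 0.15                 # hypothetical observed error rate
alpha <- 0.05
z     <- qnorm(1 - alpha / 2) # z_{0.975} = 1.959964
muhat + c(-1, 1) * z * sqrt(muhat * (1 - muhat) / n)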
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
Is there an abelian variety $A/\mathbb R$ of dimension $n$ such that $End_{\mathbb R}(A)\otimes \mathbb Q$ contains a field $K$ of degree $[K:\mathbb Q]=2n$ ? ( $End_{\mathbb R}(A)$ is the ring of $\mathbb R$ -endomorphisms of $A$ )
|
No.
Assume for contradiction that such an $A$ exists. First look at the singular cohomology $H^1(A_{\mathbb C}, \mathbb Q)$ , which admits an action of $K$ and so is a $K$ -vector space. It has dimension $2n$ over $\mathbb Q$ and so is a 1-dimensional $K$ -vector space.
Tensoring with $\mathbb C$ , we see that $H^1(A_{\mathbb C}, \mathbb C)$ , as a vector space with an action of $K$ , is a sum of $2n$ eigenspaces of $K$ associated to the $2n$ different embeddings $K \to \mathbb C$ .
Now by Hodge theory, $H^1(A_{\mathbb C}, \mathbb C) = H^1(A_{\mathbb C}, \mathcal O_A) + H^0(A_{\mathbb C}, \Omega^1_A)$, with the two summands complex conjugates of each other. So if the eigenvector associated to an embedding appears in $H^1(A_{\mathbb C}, \mathcal O_A)$, then the eigenvector associated to the complex conjugate embedding appears in $H^0(A_{\mathbb C}, \Omega^1_A)$; and because each eigenspace is 1-dimensional, so that there is only one eigenvector up to scaling, no eigenvector associated to the complex conjugate embedding appears in $H^1(A_{\mathbb C}, \mathcal O_A)$.
So $H^1(A_{\mathbb C}, \mathcal O_A)$ is not isomorphic to its complex conjugate as a complex vector space with an action of $K$ .
But $H^1(A_{\mathbb C}, \mathcal O_A) = H^1(A_{\mathbb R}, \mathcal O_A) \otimes_{\mathbb R} \mathbb C$ and thus is isomorphic to its complex conjugate. (Here we use that the endomorphisms in $K$ are defined over $\mathbb R$ and thus act on $ H^1(A_{\mathbb R}, \mathcal O_A) $ .)
This is a contradiction, so no such $A$ exists.
|
(Essentially the same argument as the one given by Will Sawin, but perhaps a bit simpler. Further clarification included thanks to comment by Wojowu.)
If $A$ is an abelian variety over a field $k\supset\mathbb{Q}$ , then the tangent space $T_0(A)$ at identity is a module over $\mathrm{End}_{k}(A)\otimes\mathbb{Q}$ .
Now, if the latter contains a field $K$ , then $T_0(A)$ has to have dimension at least 1 over $K$ . On the other hand $T_0(A)$ has dimension $\dim(A)$ over $k$ . Thus $[K:\mathbb{Q}]\leq\dim(A)$ .
Added : Alternate explanation of above.
Consider $A(\mathbb{R})$ as a Lie group with connected component $A(\mathbb{R})_0$. The exponential map $T_0(A)\to A(\mathbb{R})_0$ is the universal covering of a compact torus of real dimension $n=\dim(A)$. It is clear that elements of $\mathrm{End}_{\mathbb{R}}(A)$ lift to this cover; let $K$ be a subfield of $\mathrm{End}_{\mathbb{R}}(A)\otimes\mathbb{Q}$. Note that $\mathcal{O}_K$ is a domain and the covering group (which is $\mathbb{Z}^{n}$) is a module over $\mathcal{O}_K$. Thus the rank of $\mathcal{O}_K$ as a $\mathbb{Z}$-module is at most $n$.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
It may be that part of the universe is made of antiprotons; we've only ever been on planet Earth, in the Milky Way.
How do we know that the building block of the entire universe is the proton (hydrogen atom)?
|
There are two parts to this question. One is a question of naming conventions. If matter and anti-matter were created with equal probability (as they are in many processes) then it would just be a matter of convention as to which way we name them (meaning there wouldn't be a truly natural way to distinguish a proton from an anti-proton).
However, as it is not always true that matter and anti-matter production is identical in all processes, this leads to the second part of the question: why is there a preference for matter over anti-matter (where we use the standard convention for names)? Charge-parity violation (CP violation) was predicted and observed in certain weak processes that lend a "handedness" to the universe, which helps explain some of the imbalance, but not enough of it. In order to explain the imbalance seen in the universe, there should be CP violation observed in the interactions governed by the strong force. However, it has not been seen to date, although that might just mean that we have not yet probed high enough energies to see it.
note: this is just a self reminder to revisit and reword
|
First of all one has to define "universe".
Let us assume that the universe contains all that we can detect with our astronomical detectors. Then, other "universes" could exist but we would only be able to hypothesize about them.
Our observations tell us that the universe, stars and galaxies and maybe galactic clusters consist of matter as we know it around us, in the limit of our detection abilities. We have not detected antimatter. This is because if an antimatter galaxy existed, then at the boundary in space between matter and antimatter, where there is a lot of space dust, the annihilation of matter on antimatter would be detectable, with specific energies coming out.
Also, in the accepted cosmological model of the Big Bang, the annihilation of antimatter would affect the observed microwave background radiation in specific ways which are not seen.
We can say with assurance that our universe is mainly matter, and keep on searching for isolated, undetected instances of antimatter galaxies with better and better instruments; see the AMS page.
So the answer is : by observation up to now.
As @Hal Swyers says in his answer, we need a lot more CP violation in order to explain the asymmetry observed, and scientists at the LHC are working hard on the question, as can be seen in this article by John Ellis, which answers your question thoroughly.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I understand the rate an EM signal broadcast uniformly from the Earth will decrease in its power is governed by the inverse square law. How far from Earth will radio and tv signals become indistinguishable from background noise from varied sources in the galaxy, even CMB.
I know nothing about the type of equipment that would be used to gather and analyze such a signal. I don't know what the geometry of the observation means. Assuming the optimal equipment and conditions we currently have and no other line-of-sight sources.
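For a feel of the orders of magnitude, here is a rough R sketch. Every number in it is my own assumption (an isotropic 1 MW transmitter, a 100 m receiving dish, 6 MHz TV bandwidth, 20 K system temperature); it compares the inverse-square received power against the thermal noise floor $k_B T B$:
P_t  <- 1e6            # transmitter power (W), assumed radiated isotropically
A_e  <- pi * 50^2      # effective aperture of an assumed 100 m dish (m^2)
B    <- 6e6            # assumed TV channel bandwidth (Hz)
Tsys <- 20             # assumed receiver system temperature (K)
kB   <- 1.380649e-23   # Boltzmann constant (J/K)
noise <- kB * Tsys * B                   # thermal noise floor (W)
d <- sqrt(P_t * A_e / (4 * pi * noise))  # distance (m) where signal power = noise
d / 1.496e11                             # only a few astronomical units
Under these broadband assumptions the raw signal power falls to the noise floor within the solar system; narrowband carriers integrated over long times can be detected much farther out, but that is a different calculation.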
|
Yes, the orientation of the Sun will be different from the Earth's northern and southern hemisphere, just like your example of the Moon.
I would not say that a sunspot in the "northern" hemisphere would appear to be in the "southern" hemisphere just because of the change in orientation. North is fixed on the Sun and Moon, just as it is on the Earth. (If you turn a globe of the Earth "upside down", the northern and southern hemispheres do not switch, just the orientation.)
The view through a telescope may change the orientation, but the directions North, South, East, and West on the Sun and Moon remain the same. The user of the telescope needs to determine which directions those are (up, down, left, right). It changes with the design of the telescope (reflector versus refractor) and the number of optical elements.
|
Yes. The observer is "flipped". In the northern hemisphere the sun is in the southern half of the sky, so the bottom of the sun is toward the south horizon and the top of the sun is toward the north. By convention this side is the north of the sun (or moon). When facing the sun (at noon), north is up toward the top of your head.
In the southern hemisphere the observer is "flipped" upside-down. The sun now appears in the northern half of the sky, so the bottom of the sun is toward the north horizon, i.e. the top of the sun is toward the south. It looks upside-down because the top of your head is now oriented with the south.
A diagram helps better than words: Simple diagram .
In the southern hemisphere, Orion is doing handstands.
|
HuggingFaceH4/pmp-stack-exchangedata/astronomy.stackexchange.com
|
I've seen in Google Cirq that an $X^q$ gate is converted in OpenQASM to $RX(\pi q)$; why is that?
Same for $S^q$ into $RZ(\pi q/2)$ .
|
Note that
$$RX(\phi) = \begin{pmatrix} \cos(\phi/2) & -i\sin(\phi/2) \\ -i\sin(\phi/2) & \cos(\phi/2)\end{pmatrix}$$
Then $$RX(\pi q) = \begin{pmatrix} \cos(\pi q/2) & -i\sin(\pi q/2) \\ -i\sin(\pi q/2) & \cos(\pi q/2)\end{pmatrix}.$$
Now, using that $\cos(\pi k + \pi/2) = 0 = \sin(\pi k)$ and $\cos(\pi k) = (-1)^k = \sin(\pi k + \pi/2)$ for $k\in \mathbb{Z}$, and that a global phase does not physically affect the quantum state, we see that for odd $q$ we get $X$ and for even $q$ we get the identity matrix, both up to a global phase.
We can prove the other equation similarly using
$$RZ(\phi) = \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix}$$
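A generic linear-algebra check of the $X$ case (my own R sketch, not Cirq or Qiskit API): compute the fractional power $X^q$ via eigendecomposition and compare it with $RX(\pi q)$ up to a global phase:
RX <- function(phi) matrix(c(cos(phi/2), -1i*sin(phi/2),
                             -1i*sin(phi/2), cos(phi/2)), 2, 2)
X  <- matrix(c(0+0i, 1, 1, 0), 2, 2)
Xpow <- function(q) {                 # X^q by eigendecomposition
  e <- eigen(X)
  e$vectors %*% diag(as.complex(e$values)^q) %*% solve(e$vectors)
}
q <- 0.3
U <- Xpow(q); V <- RX(pi * q)
phase <- U[1, 1] / V[1, 1]            # the global phase, here exp(1i*pi*q/2)
max(Mod(U - phase * V))               # ~1e-16: equal up to a global phase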
|
This is the matrix for $Z^t$ :
$$Z^t = \begin{bmatrix}
1&0\\0&(-1)^t
\end{bmatrix} = \begin{bmatrix}
1&0\\0&e^{i \pi t}
\end{bmatrix}$$
This is the matrix for $R_Z(\pi t)$ :
$$R_Z(\pi t) = e^{-i \pi t Z/2} = \begin{bmatrix}
e^{-i \pi t / 2}&0\\0&e^{+i \pi t / 2}
\end{bmatrix} = e^{-i \pi t/2} Z^t
$$
Which means that
$$Z^t \equiv R_Z(\pi t) \pmod{\text{global phase}}$$
Qiskit doesn't have a concept like $Z^t$ , but it does have $R_Z$ , so Cirq relies on this equality-up-to-global-phase and converts from one to the other when producing QASM. The exact same situation repeats with powers of $X$ and $Y$ .
|
HuggingFaceH4/pmp-stack-exchangedata/quantumcomputing.stackexchange.com
|
I am looking for the mechanism of thermal decarboxylation for any RCOOH. I am guessing that it's possible for the thermal energy to cleave the R-C bond homolytically, leaving R• and •COOH. Then I would guess the R• would pair its electron with the H as the O-H bond undergoes homolysis as well, and the single electrons left on the C and the O would form a bond, giving RH and CO2. However, I have no idea if this is what actually occurs. Thanks.
|
The decarboxylation of any carboxylic acid can basically take place in four ways:
Unimolecular free radical mechanism (homolytic fission)
However, in the case of thermal decomposition there has been no evidence of this type of free-radical formation, although free radicals are formed in electrolytic or photochemical processes. There is also evidence of free-radical formation in ketonic decarboxylation.
Unimolecular heterolytic fission (carbanion formation)
This is the mechanism for most carboxylic acids.
Unimolecular heterolytic fission (carbocation or carbenium ion formation)
This has not been observed in any case.
Bimolecular decarboxylation
This occurs only in the case of carboxylic acids that have high electron density on the $\alpha$-carbon, which might attract a proton from the solution. Examples include anthracene-9-carboxylic acid, 2,4,6-trimethylbenzoic acid, etc.
[Reference: B. R. Brown, "The mechanism of thermal decarboxylation", Quarterly Reviews, Chemical Society, 1951; http://pubs.rsc.org/-/content/articlelanding/1951/qr/qr9510500131#!divAbstract]
|
Under sufficiently alkaline conditions, i.e. when the acid is deprotonated, the carboxylate can undergo an electron transfer reaction with a suitable partner (oxidant). In the course of this process, the carboxylate is oxidized to an acyloxy radical , which subsequently fragments to yield an alkyl (or alkylaryl) radical and carbon dioxide.
$$\ce{R-COO- ->[-e^-] R-COO\cdot -> R\cdot + CO2}$$
A photochemical variant of this reaction was intensively examined in the group of Axel Griesbeck at the University of Cologne, Germany. For further reading, have a look at
Acc. Chem. Res., 2007, 40, 128-140 (DOI)
J. Phys. Chem. A, 2006, 110, 3356-3363 (DOI)
Synlett, 2004, 2347-2350 (DOI)
|
HuggingFaceH4/pmp-stack-exchangedata/chemistry.stackexchange.com
|
Is there any known result on approximating an arbitrary tree metric by an HST metric (or an Ultrametric)? What is the distortion? Thanks.
|
The distortion of embedding any $n$-point metric in ultrametric is at most $n-1$, and on the other hand, the distortion of embedding the path metric $P_n$ in ultrametric is at least $n-1$.
Similarly, if you are interested in probabilistic embedding, then by the Fakcharoenphol, Rao, and Talwar result mentioned above, any $n$-point metric space probabilistically embeds in an ultrametric with distortion $O(\log n)$.
On the other hand, the distortion of probabilistically embedding $P_n$ in ultrametric is $\Omega(\log n)$.
|
There is a classic result by Fakcharoenphol, Rao, and Talwar showing that any $n$-point metric space can be embedded into an HST metric with expected distortion $O(\log n)$.
One should keep in mind that this result holds only for probabilistic embeddings, since there are metrics (such as the $n$-cycle) which will give you an $\Omega(n)$ distortion if you only accept deterministic embeddings.
|
HuggingFaceH4/pmp-stack-exchangedata/cstheory.stackexchange.com
|
The Wiener measure is (in the classical sense) a Gaussian measure on the Banach space $C[0,1]:=\{f:[0,1] \to \mathbb{R} \mid f\text{ is continuous and } f(0)=0\}$.
The Wiener process is a stochastic process whose definition can be found in any textbook. In any text, the stochastic integrals or stochastic differential equations use the notation $dW$ frequently, which should denote the Wiener measure, I suppose.
However, I cannot figure out the exact relation between the Wiener 'process' and 'measure'. Wikipedia says that the Wiener process induces the Wiener measure but what exactly does that mean?
I am afraid this question might not belong to MO but I ask here. Could anyone please clarify and help me understand?
|
The Wiener measure $w$ is the distribution of the Wiener process/random function $W$ on $C[0,1]$ ; that is,
$$P(W\in A)=w(A)$$
for all Borel sets $A\subseteq C[0,1]$ . Here "Borel sets" can be replaced by "open sets" or "closed sets".
Equivalently,
$$Ef(W)=\int_{C[0,1]}f\,dw$$
for all (say) nonnegative Borel-measurable functions $f\colon C[0,1]\to\mathbb R$ . Here "nonnegative Borel-measurable" can be replaced by (say) "bounded continuous".
|
We have to differentiate between the probability measure $w$ on the space $(C[0,1],\mathcal{B}(C[0,1]))$, the Wiener measure, and the concept of an Ito integral $\int_{[0,t]} X_s dW_s$, $0 \leq t \leq 1$, where $W$ is the usual Wiener process. The Wiener measure is only a special probability measure. Of much more interest is of course the Ito integral. This is definitely not a special case of the integral in the answer of Iosif Pinelis; it requires different concepts. There are many good introductions to the Ito integral, for instance Karatzas/Shreve (1988), Brownian Motion and Stochastic Calculus, to mention only one.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
As far as I know, both autoencoders and t-SNE are used for nonlinear dimensionality reduction. What are the differences between them and why should I use one versus another?
|
Both of them try to find a lower-dimensional embedding of your data. However, they solve different minimization problems. More specifically, an autoencoder tries to minimize the reconstruction error, while t-SNE tries to find a lower-dimensional space while preserving the neighborhood distances. As a result of this attribute, t-SNE is usually preferred for plots and visualizations.
|
An autoencoder is designed to preserve the original data in a 2-norm sense, which can be thought of as preserving the kinetic energy of the data, if the data were velocities.
t-SNE, on the other hand, uses a KL divergence, which is not symmetric; this leads t-SNE to focus more on local structure, while the autoencoder tends to keep the overall L2 error small, which is global in nature.
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
I was studying about conservative forces from a physics book (NCERT, a standard Indian textbook) and came to a para which is as follows:
A force is conservative if it can be derived from a scalar quantity $V(x)$ by the relation given by $dV(x)=-F(x)dx$ . The three dimensional generalisation requires the use of vector derivative, which is outside the scope of this book.
What I didn't understand is what does it mean by 3D generalisation?
|
The problem is three dimensional, not on a 2D sphere.
The crucial point here is that space is homogeneous. Any point in space can be taken as the origin. So suppose you have an observer. You are perfectly allowed to decide that the position of this observer is the origin of your system of coordinates. So he is sitting at the point $r_0=0$ and there, $\phi$ and $\theta$ are not defined.
Now what is a geodesic passing through this point? Well, they are all the "straight lines" where $r$ varies from $0$ to positive infinity, with a fixed value of $\theta$ and $\phi$. This is how a photon emitted from the observer will propagate. Why would it not? Space is homogeneous. Of course this is just a half-line; the full line will consist of the two half-lines $(\theta, \phi)$ and $(-\theta, \phi+\pi)$. These are not all the geodesics, just those geodesics that pass through this observer.
The fact that space is curved, in this case, does not alter the "straight line propagation", because everything is homogeneous. "Curved space" means that if you measure the length of a circle, you will not find its diameter times $\pi$ . In the coordinates of a different observer, the "straight lines" of constant $\theta$ and $\phi$ of the first observer do not look "straight". But geodesics passing through himself will be "his straight lines", constant $(\theta, \phi)$ in his coordinate system.
Let me be more precise. Suppose you consider the geodesic $\theta=\pi/2$ (or $\theta=-\pi/2$). For these values of $\theta$, $\phi$ is not defined. Now this is the trajectory of a photon going straight up (or down). Now in the real Universe there will be a star here, a galaxy there, and the photon will deviate. But in the approximation of a homogeneous Universe, all directions are equivalent. To deviate from $\theta=\pi/2$ (or $-\pi/2$) the photon would have to choose some value of $\phi$. But which one? They are all equivalent! Since it cannot choose a $\phi$, it must stay on $\theta=\pi/2$.
And since you can always choose your system of coordinates so that the direction you consider is ($\theta=\pi/2$, $\phi$ undefined), the geodesics starting in this direction can never "choose" a $\phi$ to deviate in that direction rather than any other. Remember all this argument depends on the assumption that the universe is homogeneous and isotropic, that is, all directions are identical. If the universe were homogeneous but not isotropic, one could not arbitrarily choose any direction to be $\theta=\pi/2$. So in fact it only holds for homogeneous and isotropic Universes.
However, when one speaks of homogeneous Universes, it is usually understood that they are isotropic too. Homogeneous Universes which are not isotropic can be described mathematically, but they are weird. In such Universes, indeed, geodesics going through an observer might well not have fixed $\theta$ and $\phi$. But clearly the intent of Kolb and Turner was a homogeneous and isotropic Universe, even if they did not point it out.
Well, the FLRW metric is definitely both homogeneous and isotropic so my argument does indeed hold.
|
I'll try to answer.
Once the north and the south poles are fixed, one can imagine and draw the shortest
arc-like path connecting two arbitrary points on the sphere that is neither along a
latitude nor along any longitude. Such paths are also geodesics with varying θ and ϕ.
I'll try to explain it on the 2-sphere, where we have only $(r, \theta)$. The metric is written as
$$dl^2 = dr^2 + R^2\sin^2(r/R)\,d\theta^2$$
Let us choose the north pole as $(0,0)$ and take a random point on the surface of the 2-sphere, $P_1 = (r_1, \theta_1)$. Let's take another random point $P_2 = (r_2, \theta_2)$. As you mentioned, if $r_1 \ne r_2$ and $\theta_1 \ne \theta_2$, we can see that $dr$ and $d\theta$ both vary.
Now let us change our reference frame and put it at $(r_1, \theta_1)$. I am not sure how to show this mathematically, but imagine that your north pole is now $P_1$.
When we choose our north pole as $P_1$, you can imagine that the arc that goes from $P_1$ to $P_2$ has no $d\theta$ component ($\theta$ does not change). But there is still an $r$ component.
But let us forget that complication for a moment and assume that light is constrained
to move on the surface of a sphere along any geodesic.
When we describe the 2-sphere using spherical coordinates, if we want to draw a line on the surface of the sphere, we would need $\phi$ or $\theta$.
However, by using the metric above you can draw a line on the surface of the sphere by just using $r$, without using $\theta$.
So the answer lies in which coordinate system/metric you choose, and how you define $r$, $\theta$, etc.
In the book it seems that the author is talking about spherical coordinates in 3D, and in that case $d\theta = d\phi = 0$ means that light is just moving radially outward.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
We have that $\Delta U = Q + W$. What I don't see is how this formula relates to the law of conservation of energy. Can someone please clarify?
Does this mean that $\frac{dU}{dt}=\frac{dQ}{dt}+\frac{dW}{dt}=0$, so that $\frac{dQ}{dt}=-\frac{dW}{dt}$?
|
Since your second question has been answered in the comments, I will answer your first question.
Let's examine in words what $\Delta U=Q+W$ means: "Any change in internal energy arises from a flow of heat into/out of the system and/or work done by/on the system." Put differently: "The only two ways internal energy can change are if heat flow occurs or if work is done." Both heat flow and work are examples of energy transfers (indeed, both have units of energy). Also, note that any energy transfer to the system that is not classified as heat flow is automatically classified as work done on the system.
Putting this all together, we can restate the equation as follows: "In order to change the internal energy of a system, you must add or subtract energy from the system." This is an indirect way of stating that energy is conserved.
It might be easier to see this in the case of an isolated system, where no external heat flow or work occurs, so $Q=W=0$. In that case, the law reads: "In an isolated system, total internal energy does not change." This is a direct statement of conservation of energy.
|
The equation of the first law of Thermodynamics does not directly imply the conservation of energy; rather the first law of Thermodynamics is a consequence of the conservation of energy when applied to Thermodynamics( systems involving heat, temperature etc.)
The conservation of energy was known somewhat by scientists like Galileo in 1638 (knew about potential and kinetic energy conservation in a pendulum) and others before him too. This was before 1850 when Rudolf Clausius and Lord Kelvin stated the First Law of Thermodynamics for energy conservation in systems involving heat energy.
To answer your second question: yes, if you take time derivatives on both sides that equation results, but that does not mean that the rate of change of internal energy ($U$) is zero; it depends on how the other factors, such as the rate of heat supplied ($\frac{dQ}{dt}$) or the rate of work done on the system ($\frac{dW}{dt}$), change with time. Only if both of these are zero, or they are equal in magnitude and opposite in sign, can the rate of change of $U$ be zero. It's a special condition in which internal energy does not change, not a condition valid at all times. Of course, conservation of energy holds even if there is a change in $U$, as then the energy comes from the surroundings to add to the system.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
What is the simplest example of a domain $R$ which is regular (in particular Noetherian) and factorial which admits a finitely generated projective module that is not free?
In fact I'll be at least somewhat happy with any example, since I can't think of one at the moment.
Some brief comments: $R$ needs to have Krull dimension greater than one or else it is a PID. The module in question needs to have rank greater than one, because the hypotheses force
the Picard group to be equal to the divisor class group and the divisor class group to be trivial. And famously, by work of Quillen and Suslin, one cannot take $R$ to be a polynomial ring over a field. Oh yes, and of course $R$ can't be local (or even semilocal, I suppose). I'm already out of ideas...
P.S.: If you can get an easier example by removing the hypothesis of finite generation, I'd be interested in that as well.
|
Depending on what you consider simple, let $k$ be the complex numbers, or the integers, or the field with two elements (or any other commutative ring you're fond of). Let $R=k[a,b,c,x,y,z]/(ax+by+cz-1)$ . Map $R^3$ to $R$ by $(f,g,h)\mapsto xf+yg+zh$ . Let $P$ be the kernel of this map.
$P$ is the universal example of a rank 2 projective module over a $k$ -algebra that becomes free after adding a free rank-one direct summand. (That is, if $A$ is any other $k$ -algebra with such a projective module $Q$ , then there is a map from $R$ to $A$ such that $Q=P\otimes_RA$ .) One could argue that this makes it the simplest example. In particular, Hugh Thomas's example arises from this example in this way.
To see that $P$ is not free, use the fact that Hugh Thomas's example is not free. Alternatively, invoke the much more general result of Mohan Kumar and Nori, which says that if $R=k[x_1,...,x_n,a_1,...a_n]/(\sum x_ia_i-1)$ , then the kernel of the map defined by the $1\times n$ matrix $(x_1^{m_1},\ldots x_n^{m_n})$ cannot be free unless $m_1\ldots m_n$ is divisible by $(n-1)!$ .
If you want an example where the UFD property is obvious, Mohan Kumar's paper "Stably Free Modules" gives a family of examples over rings of the form $A_f$ where the rings $A$ are polynomial rings over fields. These examples all have the property that they become free after adding a free rank-one module.
|
Let $A=\mathbb R[x_0,\ldots,x_n]/(x_0^2+\ldots+x_n^2-1)$ be the coordinate ring of $S^n$. Let $P$ be the kernel of the surjection $A^{n+1}\rightarrow A$ defined by $(x_0,\ldots,x_n)$. If $n\not=1,3,7$, then $P$ is not free. Clearly $P\oplus A \sim A^{n+1}$. For $P$ not free, note that $P\otimes C(S^n,\mathbb R)$ corresponds to the tangent bundle of $S^n$, which is not free unless $n=1,3,7$. This gives examples of projective $A$-modules of rank $=$ dimension of $A$ which are stably free but not free. Note that when rank $P>$ dimension $A$, then $P$ is cancellative (Bass, 1964), i.e. $P\oplus A^t\sim Q\oplus A^t$ implies $P\sim Q$. The above example shows that Bass's result is best possible. Note that Suslin (~1977) proved that if $A$ is an affine algebra over an algebraically closed field, then projective $A$-modules of rank = dimension $A$ are also cancellative. Hence, if we replace $\mathbb R$ by $\mathbb C$ in $A$, then $P$ is free.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
On the Wikipedia page for restricted representations
https://en.wikipedia.org/wiki/Restricted_representation
there is presented a number of explicit "branching rules". In particular, there is the Weyl's branching rule from U(N) to U(N-1) given in terms of signatures $f_1 \geq \cdots \geq f_N$, for $f_i \in \mathbb{N}$, labelling irreps of U(N). I would guess that this generalises directly to the case of branching from $SU(N)$ to $SU(N-1)$ but cannot find a reference. Can someone suggest a reference?
|
The question is answered on page 385 of the classical Zhelobenko book
Compact Lie groups and their representations
for the more general case of $SU(n+m)/SU(n) \times SU(m)$.
|
Maybe the following paper might prove helpful to your question:
Masatoshi Yamazaki, Branching Diagram for Special Unitary Group SU(n), J. Phys. Soc. Jpn. 21, pp. 1829-1832 (1966)
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
Judging by the compact regular case, and more generally the spatial case, regular projectivity of locales, resp. regular injectivity of frames, must have something to do with $\neg p\lor\neg\neg p$ and $\neg(x\land y)\to(\neg x\lor\neg y)$ . On the other hand, existence of locales without points shows that the terminal locale is not regular projective. I am convinced somebody should already have found out which locales are regular projective, but who?
The same question about projectivity with respect to arbitrary epimorphisms of locales is probably easier but less interesting. Still, I don't know anything about that either.
PS As Simon Henry pointed out, I should rather pick some pullback stable class of locale epimorphisms. I guess I don't know an answer about any of them (except maybe the proper ones), so please just choose an as large as possible class of nicely behaved epimorphisms of your choice - say, quotients, or of effective descent, or triquotients, etc.
|
So the short answer is that there are no non-empty projective locales for essentially any reasonable class of epimorphisms you can think of (except maybe proper maps).
The problem is that there exists a family of non-trivial Boolean locales $B_\kappa$, indexed by infinite cardinal numbers $\kappa$, such that the only locale $X$ that has a map to all the $B_\kappa$ is the empty locale.
The maps $B_\kappa \to 1$ are open surjections, so in particular they are effective descent maps, stable regular epimorphisms and triquotient maps, so pretty much any class of epimorphisms you might think about will contain them.
But if $X$ were projective with respect to any class containing these covers, then the unique map $X \to 1$ would lift to maps $X \to B_\kappa$ for all $\kappa$, which contradicts the claim above.
Explicitly, $B_\kappa$ can be taken to be any (non-trivial) Boolean locale that "collapses the cardinal $\kappa$ to $\omega$". By that I mean: if $p:B_\kappa \to 1$ denotes the unique map, then $p^*\kappa \simeq p^* \omega$ as sheaves over $B_\kappa$. For example, $B_\kappa$ can be taken to be the double negation sublocale of the locale of injective functions $\omega \to \kappa$. The fact that $B_\kappa$ is non-trivial then follows from the fact that it is dense in this locale of injective functions. (See details in the edit below.)
And a locale $X$ having maps to all the $B_\kappa$ would collapse all infinite cardinals at the same time (in the sense that all the $p^*\kappa$ for infinite cardinals $\kappa$ would be isomorphic), which is impossible, as a locale can't collapse to $\omega$ a cardinal much larger than itself. The following is a very loose bound that shows this, though experts on forcing surely know much better bounds:
If $X$ is a non-degenerate locale, then the total number of sections of $p^* \kappa$ is larger than $\kappa$, as every element of $\kappa$ gives a global section, and it is smaller than the function space $\kappa^{\mathcal{O}(X) \times 2^{\mathcal{O}(X)}}$, as every section can be described as follows: you choose a cover of its domain of definition (a certain subset of $\mathcal{O}(X)$) and then, for each element of that subset, you choose an element of $\kappa$.
So for any locale $X$, picking $\kappa$ larger than $\omega^{\mathcal{O}(X) \times 2^{\mathcal{O}(X)}}$ we get that $p^* \omega$ can't have as many sections as $p^* \kappa$, hence they can't be isomorphic, so $X$ can't collapse any cardinal bigger than this to $\omega$.
Here is some clarification on the construction of the locales $B_\kappa$. This is a fairly standard observation, but I'm struggling to find a reference, so given that it is fairly simple, I'll write out the details.
We fix some infinite cardinal $\kappa$.
We start with $I_\kappa$, the locale that classifies injections $i:\omega \to \kappa$, so that a map $X \to I_\kappa$ is the same as the data of an injective map $p^* \omega \to p^* \kappa$.
It is easy to write a propositional geometric theory of such injections (it has basic propositions $R_{x,y}$ for $x \in \omega$ and $y \in \kappa$, interpreted as $i(x)=y$, and all the axioms that make this into an "injective" functional relation).
$I_\kappa$ is non-trivial because it has plenty of points.
Now, consider the (open) sublocale $V_y \subset I_\kappa$ for $y \in \kappa$ that classifies those injections that further satisfy $\exists x \in \omega, i(x)=y$.
$V_y$ is dense: indeed, the basic opens of $I_\kappa$ are the finite intersections of the $R_{x,z}$, and for any finite intersection $\cap R_{x_i,z_i}$ of these, if it is non-degenerate you can explicitly construct a point of it that is also in $V_y$ (take a function that sends $x_i$ to $z_i$ and some other value to $y$; if this is impossible it means the intersection is empty).
The intersection of all the $V_y$ is hence a dense sublocale (an intersection of a family of dense sublocales is dense).
By definition this intersection classifies bijections $\omega \to \kappa$. So this is exactly the $T_\kappa$ I mentioned in the comment.
Alternatively, you can define $B_\kappa$ to be the double negation sublocale of $I_\kappa$, which is hence included in all the $V_y$, so that $B_\kappa \subset T_\kappa$ also "collapses the cardinal $\kappa$ to $\omega$".
Both $B_\kappa$ and $T_\kappa$ are non-trivial because they are dense in $I_\kappa$, which is non-trivial.
|
I am completely convinced by Simon Henry's answer. This is just an addendum to it, mainly for myself: I want to look at these $I_\kappa$, $T_\kappa$ and $B_\kappa$ in as much detail as possible. In fact I will just be more or less repeating parts of what Simon wrote in language more familiar to me.
So, let first $F_\kappa$ be the free frame on generators $u_{n\alpha}$ , $n\in\omega$ , $\alpha\in\kappa$ . Note that each element of $F_\kappa$ is a join of finite meets of generators $u_{n_1\alpha_1}\land\cdots\land u_{n_m\alpha_m}$ . Then the frame of opens of $I_\kappa$ is the quotient of $F_\kappa$ by the obvious relations which ensure that locale maps $X\to I_\kappa$ are in one-to-one correspondence with families $U_{n\alpha}$ of opens of $X$ satisfying
$$
\begin{aligned}
\bullet\ &\bigvee_\alpha U_{n\alpha}=\top\text{ for all $n$};\\
\bullet\ &U_{n\alpha}\wedge U_{n\beta}=\bot\text{ for all $n$ and all $\alpha\ne\beta$};\\
\bullet\ &U_{m\alpha}\wedge U_{n\alpha}=\bot\text{ for all $m\ne n$ and all $\alpha$}.
\end{aligned}
$$
In particular, each embedding $i:\omega\hookrightarrow\kappa$ determines a map $X\to I_\kappa$ for any $X$ , by declaring $U_{n\,i(n)}=\top$ for all $n$ and $U_{n\alpha}=\bot$ for all $n$ , $\alpha$ with $\alpha\ne i(n)$ . This implies that $u_{n_1\alpha_1}\land\cdots\land u_{n_m\alpha_m}=\bot$ if and only if there are either some $n_i=n_j$ with $\alpha_i\ne\alpha_j$ or some $\alpha_i=\alpha_j$ with $n_i\ne n_j$ .
For each $\alpha\in\kappa$ let $v_\alpha=\bigvee_nu_{n\alpha}$ . Then by what we just said, $v_\alpha\land u_{n_1\alpha_1}\land\cdots\land u_{n_m\alpha_m}\ne\bot$ for any $u_{n_1\alpha_1}\land\cdots\land u_{n_m\alpha_m}\ne\bot$ , hence $\neg v_\alpha=\bot$ . It follows that imposing further relations $v_\alpha=\top$ for all $\alpha$ gives a nontrivial dense sublocale $T_\kappa$ of $I_\kappa$ ; in particular $T_\kappa$ contains $B_\kappa:=I_\kappa^{\neg\neg}$ .
Finally, given any map $X\to B_\kappa$ (or to $T_\kappa$ as well), all these relations imply that in the topos $\operatorname{Sh}(X)$ of sheaves on $X$ , projections of the subobject
$$
\coprod_{(n,\alpha)\in\omega\times\kappa}U_{n\alpha}\rightarrowtail\coprod_{\omega\times\kappa}1
$$
both to $\coprod_\omega1$ and to $\coprod_\kappa1$ are isomorphisms.
And in fact such maps are clearly in one-to-one correspondence with isomorphisms $\coprod_\omega1\cong\coprod_\kappa1$ in $\operatorname{Sh}(X)$.
It follows that for each such map, if $X$ is nontrivial then there is an embedding $\kappa\hookrightarrow\hom_{\operatorname{Sh}(X)}(1,\coprod_\omega1)$. Hence for each nontrivial $X$ there is no such map for $\kappa$ exceeding the cardinality of $\hom_{\operatorname{Sh}(X)}(1,\coprod_\omega1)$, i.e., of the set of all countable partitions of $X$ into clopens.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
Conformal symmetry in QFT has been extremely useful for physics. However, while most of QFT is usually done in momentum space, CFTs are usually studied in position space or in terms of Mellin-transformed variables (as opposed to Fourier transforming to go to momentum space). It seems the reasoning must have something to do with the existence of massless particles and bad branch cuts in momentum space. But I want to make these statements precise.
So, what exactly is the reason CFTs (especially 2D CFTs) are not studied in momentum space?
There are some relatively new papers on momentum-space CFT (e.g., this one), but by and large momentum space is still usually not the first choice for analysing CFTs.
|
In a standard QFT you study scattering amplitudes. The Fourier transform of the two point function $\langle \phi(x) \phi(0) \rangle$ contains a pole at stable particles $p^2 = m^2$ . This pole is picked up by the LSZ reduction theorem when you study scattering amplitudes.
However, in a CFT, conformal symmetry implies that you don't have these isolated poles in the complex $p^2$ plane. If you try to apply the LSZ reduction theorem to a CFT, your scattering amplitudes will always be $0$ for this reason, as you won't have this pole to cancel out the factor of $(p^2 - m^2)$ .
Thing is, in a CFT, scattering is not the right thing to study. Because you have scaling symmetry, there's no meaningful notion of widely separated wave packets. How can you "widely separate" anything when you can always rescale them close together?
Because of this, the meaningful things to study in CFT take on a very different character than standard QFTs, like QED, where you usually study scattering amplitudes of momentum eigenstates.
|
Another answer points out that scattering amplitudes are not good observables in CFT. This is certainly true. But also in QFT not everything is scattering amplitudes.
One important reason why CFT is usually favoring position space (or Mellin space) is the operator product expansion (OPE). This is the tool that makes CFT so much more tractable than QFT, and it works like a charm in position space: the OPE converges very fast whenever two operators are close to each other.
There is also an OPE in momentum space, but:
it only applies to Wightman functions, not time-ordered products or Euclidean correlators,
its convergence properties are not as nice as in position space (since the question asks specifically about 2D CFT, see arXiv:1912.05550 ).
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
It's about 3 millimetres in length. Found on September 10, 2015.
|
It's a moth fly. They're cute fuzzy little things. Most people consider them pests.
An annoying and troublesome pest that concerns numerous homeowners is the moth fly, also commonly known as the drain fly, filter fly, or sewage fly . Moth flies are frequently found indoors on windows, sinks and walls.
They're often found around sink drains because the eggs and larvae survive on the biofilm in your, well, drains.
|
This IS a Moth Fly (Family Psychodidae ), but the heavily-patterned wings and their posture at rest argue against the genus Psychoda ; the pale spots at the wing margins suggest Clogmia albipunctata (Williston, 1893) which has been extending its northern range limits (half a century ago, this species and Toronto mentioned in the same sentence would have been considered a likely misidentification; this diptera.info thread discusses two localities for that species that are even further north -- Clallam and Grant Counties in Washington).
|
HuggingFaceH4/pmp-stack-exchangedata/biology.stackexchange.com
|
The $k$ -Vertex-Disjoint Paths Problem ( $k$ - $\text{DPP}$ ) is defined as follows:
Input: A graph $G=(V,E)$ and $k$ pairs of vertices $(s_1,t_1),\ldots,(s_k,t_k)$ .
Question: Do there exist $k$ pairwise vertex-disjoint paths $P_1,\ldots,P_k$, such that $P_i$ goes from $s_i$ to $t_i$?
The problem, for general $k$ , is known to be NP-complete even for planar graphs of max degree 3 .
That said, $2$ - $\text{DPP}$ admits a nearly linear algorithm for general undirected graphs.
Is there anything known for any higher value of $k$ (assuming fixed $k$ value)? What about $k=3$ ?
|
For undirected graphs the problem admits an FPT algorithm for any fixed $k$: Robertson and Seymour, Graph Minors XIII. Their algorithm runs in time $2^{2^{2^{2^{O(k)}}}} P(n)$, which means it is polynomial for $k = O(\log\log\log\log n)$. But it's not known whether there is a better bound for $k$ or not. In special cases, such as the undirected planar case and recently the bounded-genus case, the running time has been improved.
For directed graphs it's NP-complete even in the case of the 2-disjoint-paths problem. But in planar graphs it is FPT by a result of Marx et al.; in graphs of bounded genus it is at least in XP (not known whether it is FPT or not); in directed acyclic graphs it is $W[1]$-hard by a result of Slivkins; for tournaments it is NP-complete (not known if it is FPT or even in XP; the edge-disjoint version admits an XP algorithm by a result of Seymour et al.).
|
Here's what I know.
For undirected graphs, the problem admits a 'nonconstructive' FPT algorithm based on Robertson-Seymour theory. The running time is $f(k) n^3$ where $f$ is a fast-growing function.
For directed graphs, the problem is $\sf{NP}$-complete for any constant $k \geq 3$. In the case of planar directed graphs, it is polynomial for fixed $k$: there is a relatively simple $n^{O(k)}$ algorithm due to Schrijver ('Finding k Disjoint Paths in a Directed Planar Graph', SICOMP) and a more involved FPT algorithm with running time $f(k) poly(n)$ due to Cygan et al., where $f(k)$ is a double-exponential function ('The Planar Directed K-Vertex-Disjoint Paths Problem Is Fixed-Parameter Tractable', FOCS 2013).
Also, if I remember right there are some cases where the edge-disjoint version is easier, but I couldn't recollect the details.
|
HuggingFaceH4/pmp-stack-exchangedata/cstheory.stackexchange.com
|
I have a list of mouse genes, but all our analysis happens on human genes. Is it possible to translate these mouse genes to human genes? And is there any tool that can help me with this?
I am mostly a computer science guy, so I am not sure this is even possible. However, according to someone, this could work; I don't expect a full conversion though.
Thanks.
|
I would recommend using Biomart from Ensembl. This is basically a Swiss Army knife for converting gene names into various IDs, getting corresponding locations on the chromosomes, and so on. You can upload a list of gene names, convert them into the corresponding gene IDs of other species, and then convert these IDs back to gene names.
If you haven't used Biomart yet, you can either start by going on their help page or by having a look on this tutorial for the conversion of IDs.
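If you prefer scripting it, Ensembl also exposes orthology through its REST API. Below is a minimal sketch with requests; the endpoint path, parameters and JSON field names are my assumptions to verify against the current REST documentation, and Trp53 is just an example mouse gene symbol:

```python
import requests

url = "https://rest.ensembl.org/homology/symbol/mus_musculus/Trp53"
params = {"target_species": "homo_sapiens",
          "type": "orthologues",
          "content-type": "application/json"}
resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()
for entry in resp.json().get("data", []):
    for hom in entry.get("homologies", []):    # field names: check the docs
        print(hom["target"]["id"], hom["target"].get("species"))
```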
|
Sounds like you are searching for homologs of mouse genes in the human genome. BLAST should be able to help you find them. You might want to find the sequences of the mouse genes on GenBank and then run BLAST with them.
However, in this scenario, protein BLAST (BLASTp) will actually help you more, since protein sequences are conserved better between species than DNA sequences are. In this case, you need to find the proteins coded by the mouse genes and then run BLASTp.
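If you want to automate this, Biopython can submit the search to NCBI for you. A minimal sketch, assuming Biopython is installed; the protein sequence below is a made-up placeholder, and the entrez_query filter simply restricts hits to human:

```python
from Bio.Blast import NCBIWWW, NCBIXML

protein_seq = "MSTAVLENPGLGRKLSDFGQETSYIEDN"   # placeholder sequence
handle = NCBIWWW.qblast("blastp", "nr", protein_seq,
                        entrez_query="Homo sapiens[organism]")
record = NCBIXML.read(handle)
for alignment in record.alignments[:5]:        # top hits with their E-values
    print(alignment.title, alignment.hsps[0].expect)
```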
|
HuggingFaceH4/pmp-stack-exchangedata/biology.stackexchange.com
|
I am reading about Prioritized Experience Replay , and can't understand the following:
On page 4, every transition can be selected from the table with its own probability.
Here is the probability of selecting transition $i$ (if I understood correctly):
$$P(i) = \frac{p_i}{\sum_k{p_k}} $$
where:
$$p_i = \frac{1}{\text{index in table}}$$
Afterwards, the paper says:
For the rank-based variant, we can approximate the cumulative density function with a piecewise
linear function with k segments of equal probability. The segment boundaries can be precomputed
(they change only when N or α change). At runtime, we sample a segment, and then sample uniformly
among the transitions within it.
My question is, why do we have to approximate the density if it can be achieved with the following:
roll a die between 1 and N (with a die that is exponentially more likely to roll a '1' than a '2', etc.)
select the item at the rolled index.
In C++ we have std::exponential_distribution [source], so there is no need to approximate anything... if we maintain our table sorted in descending order.
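For what it's worth, here is a toy numpy sketch of both schemes (all sizes and names are made up). One caveat about the proposal above: $p_i \propto 1/\text{rank}$ is a power-law distribution, not an exponential one, so std::exponential_distribution would not reproduce it exactly; and the point of the paper's segment trick is that the per-sample cost stays constant even while priorities keep changing.

```python
import numpy as np

N, k = 10_000, 64                      # table size, minibatch size (toy values)
p = 1.0 / np.arange(1, N + 1)          # p_i proportional to 1/rank
P = p / p.sum()

# Direct categorical draw, as proposed in the question (O(N) work to rebuild P):
idx = np.random.choice(N, size=k, p=P)

# Rank-based approximation from the paper: k precomputed segments of equal
# probability mass, then one uniform draw inside each segment (O(1) per sample).
cdf = np.cumsum(P)
bounds = np.searchsorted(cdf, np.arange(1, k + 1) / k)
bounds[-1] = N                         # make the last segment reach the end
bounds = np.concatenate(([0], bounds))
seg_idx = np.array([np.random.randint(lo, max(hi, lo + 1))
                    for lo, hi in zip(bounds[:-1], bounds[1:])])
# Note: very heavy head items can swallow several segments; the max() guard
# then just samples that item repeatedly, approximating its large mass.
```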
|
I suggest taking a look at this page for some more ideas:
Feature Selection
That being said, a couple of ideas that come to mind quickly are to:
use a tree-based method (like Random Forest) and look at your feature importances. Scikit-learn has a handy class for doing just that; see the link above.
Use some sort of regularization/penalty like L1 or L2 regularization. That will force non-useful features to have parameters close to zero.
Recursively remove variables and see what the resulting output is and cross-validate. Again sklearn has a method for this.
Generally, these methods will be "expensive" as you are fitting multiple models to get you where you need to go.
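A minimal scikit-learn sketch of the three ideas above, on synthetic data (all hyperparameters are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# 1) impurity-based importances from a tree ensemble
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(rf.feature_importances_)

# 2) an L1 penalty pushes useless coefficients toward zero
l1 = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
print(l1.coef_)

# 3) recursive feature elimination with cross-validation
selector = RFECV(LogisticRegression(max_iter=1000), cv=5).fit(X, y)
print(selector.support_)
```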
|
There are many ways to estimate how good a feature is at predicting $y_i$. One good method is to build a proper ML model using just the feature whose importance you want to check. In this case, we would build a logistic regression model using only the features you want to test for importance.
Do remember that if it is a categorical feature, you should encode it in vector form based on the model you are using;
for example, one-hot encoding works best for linear models, and response coding works well for tree-based models.
|
HuggingFaceH4/pmp-stack-exchangedata/datascience.stackexchange.com
|
Let $\mu$ be the Mobius function, $\tau_k(x)$ the number of ways to write $x$ as a product of $k$ natural numbers and $\phi$ the Euler totient function.
I would like to obtain an upper bound for
$$
\sum_{x < X} \frac{\mu^2(x) \tau_k(x)}{\phi(x)}.
$$
In the paper I am reading, this is bounded by
$$
\ll (\log X)^k
$$
without explanation. I would greatly appreciate any explanation. Thank you.
ps I can see that
$$
\sum_{x< X} \frac{\mu^2(x)}{\phi(x)} \ll \log X
$$
so I think I just have to bound $\tau_k(x)$ ...
|
We can obtain an explicit upper bound using the identity (where $p$ is restricted to primes)
$$\frac{n}{\phi(n)}=\prod_{p\mid n}\left(1+\frac{1}{p-1}\right)=\sum_{d\mid n}\frac{\mu^2(d)}{\phi(d)}.$$
For $X\geq 1$ , the above identity implies that
\begin{align*}\sum_{n\leq X}\frac{\tau_k(n)}{\phi(n)}
&=\sum_{n\leq X}\frac{\tau_k(n)}{n}\sum_{d\mid n}\frac{\mu^2(d)}{\phi(d)}\\
&=\sum_{d\leq X}\frac{\mu^2(d)}{\phi(d)}
\sum_{m\leq X/d}\frac{\tau_k(dm)}{dm}\\
&\leq\sum_{d\leq X}\frac{\mu^2(d)\tau_k(d)}{d\phi(d)}
\sum_{m\leq X/d}\frac{\tau_k(m)}{m}\\
&<\left(\sum_{d=1}^\infty\frac{\mu^2(d)\tau_k(d)}{d\phi(d)}\right)
\left(\sum_{m\leq X}\frac{\tau_k(m)}{m}\right).
\end{align*}
On the right hand side,
\begin{align*}\sum_{d=1}^\infty\frac{\mu^2(d)\tau_k(d)}{d\phi(d)}
&=\prod_p\left(1+\frac{k}{p(p-1)}\right)\\
&\leq\prod_p\left(1+\frac{1}{p(p-1)}\right)^k\\
&=\left(\prod_p\frac{1-p^{-6}}{(1-p^{-2})(1-p^{-3})}\right)^k\\
&=\left(\frac{\zeta(2)\zeta(3)}{\zeta(6)}\right)^k,
\end{align*}
while it is straightforward that
$$\sum_{m\leq X}\frac{\tau_k(m)}{m}\leq
\left(\sum_{m\leq X}\frac{1}{m}\right)^k\leq (1+\log X)^k.$$
We conclude that
$$\sum_{n\leq X}\frac{\tau_k(n)}{\phi(n)}<\left(\frac{\zeta(2)\zeta(3)}{\zeta(6)}\right)^k(1+\log X)^k.$$
P.S. Of course many other approaches are available and an asymptotic formula can also be proved. My goal was to give a fully explicit upper bound.
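For readers who like to sanity-check such bounds numerically, here is a small script (it assumes sympy for factorization; the constant $\zeta(2)\zeta(3)/\zeta(6) \approx 1.9436$ is hard-coded):

```python
import math
from sympy import factorint

def term(n, k):
    # tau_k(n) / phi(n); both are multiplicative, with
    # tau_k(p^e) = C(e+k-1, k-1) and phi(p^e) = p^(e-1) (p-1).
    tau_k, phi = 1, 1
    for p, e in factorint(n).items():
        tau_k *= math.comb(e + k - 1, k - 1)
        phi *= (p - 1) * p ** (e - 1)
    return tau_k / phi

k, X = 3, 2000
lhs = sum(term(n, k) for n in range(1, X + 1))
C = 1.9435964   # zeta(2) * zeta(3) / zeta(6)
print(lhs, "<", C ** k * (1 + math.log(X)) ** k)
```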
|
One should always consider Rankin's method: for $\varepsilon>0$ ,
$$
\sum_{n\le x} \frac{\mu^2(n)\tau_k(n)}{\phi(n)} \le \sum_{n=1} ^\infty \frac{\mu^2(n)\tau_k(n)}{\phi(n)} \frac{x^\varepsilon}{n^\varepsilon} = x^\varepsilon \prod_p \bigg( 1 + \frac k{(p-1)p^\varepsilon} \bigg),
$$
assuming $\varepsilon$ is chosen so that the series/product converges. To show that the right-hand side is $\ll(\log x)^k$ , it suffices to prove that
$$
\varepsilon \log x + \sum_p \log\bigg( 1 + \frac k{(p-1)p^\varepsilon} \bigg) \le k\log\log x+O(1).
$$
We note that
\begin{align*}
\varepsilon \log x + \sum_p \log\bigg( 1 + \frac k{(p-1)p^\varepsilon} \bigg) &\le \varepsilon \log x + \sum_p \frac k{(p-1)p^\varepsilon} \\
&\le \varepsilon \log x + \sum_p \frac k{p\cdot p^\varepsilon} + O\bigg( \sum_p \frac1{p^2\cdot p^\varepsilon} \bigg),
\end{align*}
and the sum in the error is $O(1)$ uniformly for $\varepsilon\ge0$ ; therefore it suffices to show that
$$
\varepsilon \log x + \sum_p \frac k{p\cdot p^\varepsilon} \le k\log\log x+O(1).
$$
If we choose for example $\varepsilon = 1/\log x$ , the left-hand side is bounded above by
\begin{align*}
1 + &\sum_{p\le x} \frac kp + \sum_{j=0}^\infty \sum_{x^{2^j} < p \le x^{2^{j+1}}} \frac k{p\cdot (x^{2^j})^{1/\log x}} \\
&= k\log\log x+O(1) + k\sum_{j=1}^\infty \frac1{e^{2^j}} \sum_{x^{2^j} < p \le x^{2^{j+1}}} \frac1p \\
&= k\log\log x+O(1) + k\sum_{j=1}^\infty \frac1{e^{2^j}} \bigg( \log\log x^{2^{j+1}} - \log \log x^{2^j} + O\bigg( \frac1{\log x^{2^j}} \bigg) \bigg) \\
&= k\log\log x+O(1) + k\sum_{j=1}^\infty \frac1{e^{2^j}} \bigg( \log 2 + O\bigg( \frac1{2^j\log x} \bigg) \bigg) \\
&= k\log\log x+O(1)
\end{align*}
as needed.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
Is there a way to replace one gas (for example, air) with another gas (say, helium) in a gas chamber? It does not need to be a complete replacement, but there would need to be a significant amount of the new gas. Thanks!
|
Chemists who need to control which gases are in their vessels (e.g. those who work with air sensitive compounds) do this routinely.
There are two ways to achieve gas replacement. The more reliable way is to work with vacuum-lines that allow switching the input to a vessel between a vacuum and an inert gas. First the air is removed and then the inert gas introduced.
Often, though, this is overkill. In many cases where liquids are involved, the liquid needs to be degassed (to remove air dissolved in it). In such cases it is often good enough to bubble the inert gas through the liquid and the vessel, letting it escape the vessel, for long enough to sweep out the unwanted gases.
If it isn't important to remove all traces of the original gas, this is usually an effective technique and gets more effective the longer the inert gas is allowed to flow.
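To put a rough number on "long enough": if one assumes the vessel stays well mixed (a standard dilution model; here $Q$ is the volumetric flow of inert gas and $V$ the vessel volume, notation mine), then

$$V\frac{dC}{dt} = -Q\,C \quad\Longrightarrow\quad C(t) = C_0\,e^{-Qt/V},$$

so every vessel-volume of inert gas passed through cuts the residual concentration of the original gas by a factor of $e$; roughly seven volume changes leave less than 0.1% of it.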
|
First you would need to know whether the gas is denser than air or not. You would make two holes in the container, one in the top, the other in the bottom. For helium, start pumping it in through the top. The air will be pushed out the bottom until the container contains only helium. The denser air will flow out through the bottom and the helium will remain at the top (to an extent). The container would then need to be resealed to ensure the gas stays. For a dense gas like CO$_2$, you would do the inverse, pumping through the bottom instead.
As a side note, it would take less gas to purge the container if you pumped it in slowly, letting the gas settle in layers. Pumping too fast would cause turbulent flow, pushing out both gases at once instead of just the one you wanted to get rid of.
|
HuggingFaceH4/pmp-stack-exchangedata/chemistry.stackexchange.com
|
I am using tensorflow to write simple neural networks for a bit of research and I have had many problems with 'nan' weights while training. I tried many different solutions like changing the optimizer, changing the loss, the data size, etc. but with no avail. Finally, I noticed that a change in the learning rate made an unbelievable difference in my weights.
Using a learning rate of .001 (which I thought was pretty conservative), the minimize function would actually exponentially raise the loss. After one epoch the loss could jump from a number in the thousands to a trillion and then to infinity ('nan'). When I lowered the learning rate to .0001, everything worked fine.
1) Why does a single order of magnitude have such an effect?
2) Why does the minimize function literally perform the opposite of its function and maximize the loss? It seems to me that that shouldn't occur, no matter the learning rate.
|
You might find Chapter 8 of Deep Learning helpful. In it, the authors discuss training of neural network models. It's very intricate, so I'm not surprised you're having difficulties.
One possibility (besides user error) is that your problem is highly ill-conditioned. Gradient descent methods use only the first derivative (gradient) information when computing an update. This can cause problems when the second derivative (the Hessian) is ill-conditioned.
Quoting from the authors:
Some challenges arise even when optimizing convex functions. Of these, the most prominent is ill-conditioning of the Hessian matrix $H$. This is a very general problem in most numerical optimization, convex or otherwise, and is described in more detail in section 4.3.1.
The ill-conditioning problem is generally believed to be present in neural network training problems. Ill-conditioning can manifest by causing SGD to get “stuck” in the sense that even very small steps increase the cost function. [my emphasis added]
The authors provide a simple derivation to show that this can be the case. Using gradient descent, the cost function should change (to second order) by
\begin{equation}
\frac{\varepsilon^2}{2} g^{T} H g - \varepsilon g^{T} g
\end{equation}
where $g$ is the gradient, $H$ is the Hessian, and $\varepsilon$ is the learning rate. Clearly, if the second derivatives are large, then the first term can swamp the second, and the cost function will increase, not decrease. Since the first and second terms scale differently with $\varepsilon$, one way to alleviate this problem is to reduce $\varepsilon$ (although, of course, this can result in learning too slowly).
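A one-dimensional toy run of this mechanism (the curvature value is made up to mimic ill-conditioning):

```python
# Gradient descent on f(x) = a * x**2 / 2, whose Hessian is just a.
# The update is x <- x * (1 - eps * a); it diverges exactly when eps * a > 2.
a, x0, steps = 3000.0, 1.0, 20

for eps in (0.001, 0.0001):          # the two learning rates from the question
    x = x0
    for _ in range(steps):
        x -= eps * a * x             # x <- x - eps * f'(x)
    print(f"eps={eps}: x after {steps} steps = {x:.3g}")
```

With eps = 0.001 the iterate blows up; with eps = 0.0001 it converges. A single order of magnitude crosses the 2/a stability threshold.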
|
1) Why does a single order of magnitude have such an effect?
2) Why does the minimize function literally perform the opposite of its function and maximize the loss? It seems to me that that shouldn't occur, no matter the learning rate.
There are two main reasons. First, you are not using the same data in the first step as in the second. If in the first step the model learns those values and falls into a local minimum, then it is very likely to give a bigger loss for new values.
The second reason is the shape of the cost function. You try to minimize the value by small steps, and the length of those steps is given by two factors: the gradient and the learning rate. Imagine your function is like $x^2$. If your values are close to 0, the gradient is going to be smaller than if they are further away, but if your learning rate is big then, instead of getting closer to 0, you actually increase the error, because your new point, based on the gradient and the learning rate, is further from 0 than your previous step. And this can happen several times.
Take a look to this link: http://www.statisticsviews.com/details/feature/5722691/Getting-to-the-Bottom-of-Regression-with-Gradient-Descent.html
If you look at the figures with alpha 0.01 and alpha 0.12, you will see that in the first figure the learning rate is small, so the steps get closer to the minimum, while in the second case the learning rate is so big that each step moves further away.
|
HuggingFaceH4/pmp-stack-exchangedata/datascience.stackexchange.com
|
Could a Lorentz-force engine (like the ones that move the write head inside hard drives) be used to run a car? Would it be possible to get it producing 60 hp and a decent amount of torque? If so, how large would it have to be, and what would its power consumption be?
|
Every electric motor runs using the Lorentz force, so there is no difference in principle between the motor in the hard drive and an electric car motor. There are commercial 60 hp electric motors that run electric cars, so the answer is yes. Its power consumption is, well, 60 hp. That's about 45,000 watts, plus a little more for heating of the engine and so on, so say an even 50,000 watts. This randomly selected 60 hp electric motor is about 18 inches across (look for 364T in this link).
|
Yes, such an engine would be possible. For a circular motion you would need to modify the setup a little bit but not significantly.
While Ron is absolutely right that there is no difference in principle, a big engineering/economic problem is that an engine with such a design would need very large and powerful permanent magnets. NdFeB magnets of a size sufficient to deliver 50 kW would be quite heavy and expensive.
A standard electrical motor such as the one shown below from a vacuum cleaner does not require a large amount of rare earth materials. Instead all fields are created by coils wound around a high permeability material (iron alloys or for high end motors alloys with some amount of Nd).
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
Why is the Poisson ratio necessary, when volume is conserved? I read that volume is conserved when a body is subjected to longitudinal (compressive or tensile) stress or shear stress, so given that volume is conserved, can we not simply find the change in diameter (and hence the lateral stress) without the Poisson ratio? Is either the Poisson ratio or the conservation of volume only applicable in certain limits? If so which ones? Thanks!
|
We need Poisson's ratio $\sigma$ precisely because the volume is usually not conserved when we stretch, squash or twist something. An exception is ordinary rubber which is, to a reasonable approximation, incompressible, so for rubber $\sigma=1/2$ . For steel it is about $.3$ .
|
Volume is conserved in plastic deformation, so the density of a broken sample after a tensile test should be the same as the initial one.
But during the test, while stressed, there is a (very small) change in the volume. For no change at all, the Poisson ratio would have to be 0.5.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I have a data set with many documents of 50 to 100 words each.
I need to clean those data by correcting misspelled words in the documents.
I have an algorithm which predicts possible correct words for a misspelled word.
The problem is that I need to choose or verify the predictions made by that algorithm in order to clean the spelling errors in the documents.
Can I use all the possible correct words predicted by the algorithm, represented as word vectors, to perform clustering on those data?
|
It's ok to compute the global performance on the concatenation of the predictions for all the K folds after running the cross-validation process, it depends on the goal and on the metric (for instance the mean accuracy over the folds gives the same result as the global accuracy, but that's not true for every evaluation measure).
But very often the goal involves not only measuring performance accurately but also measuring the variance of the performance across the folds, in order to detect instability. This can't be done from the concatenation of the predictions, so it's often more convenient to keep the folds results separated.
(this is my interpretation, there might be other reasons)
|
I'm not 100% sure what you mean. In k-fold CV, you partition the training set into $k$ subsets of equal size. Holding out one of these folds at a time, you train the model on the remaining $k-1$ folds to make a prediction for the held-out fold. Thus, in the end, you have one prediction for each observation in your training data.
Now, you can compute average accuracy in two equivalent ways: for each fold, compute the average accuracy, then average the k averages. Or, you average accuracy for every single observation. The two values are the same (up to rounding errors), since all you do is take the intermediate step of calculating averages for each fold. The advantage is that you save on memory because you only have to retain $k$ values (average accuracy per fold) instead of $N$ values (one value for each observation in the training set).
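A quick numpy illustration of that equivalence, with random labels and 5 equal-size folds:

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=100)
y_pred = rng.integers(0, 2, size=100)
folds = np.array_split(np.arange(100), 5)     # 5 equal-size folds

per_fold = [np.mean(y_true[f] == y_pred[f]) for f in folds]
print(np.mean(per_fold))                      # average of the fold accuracies
print(np.mean(y_true == y_pred))              # pooled accuracy -- same number
```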
|
HuggingFaceH4/pmp-stack-exchangedata/datascience.stackexchange.com
|
Let's assume it is possible for a spacecraft to travel at the speed of light (I've read the interstellar book by Kip Thorne, apparently this is theoretically possible if you swing around two black holes)
I have watched the video where the travel of a light particle is simulated, and while planets are certainly big enough that you can adjust your route, what would happen if a rocket (or a spaceship in general) collided with an earth-like planet (i.e. a planet having a solid surface)? Would we burst through the whole planet, make it only to the core, or just a few km in? I know what happens when a meteor lands, but obviously meteors aren't traveling at the speed of light.
And one similar question: when a spaceship is traveling at the speed of light, is there a problem with collisions with smaller objects? I think you can't avoid millions of rocks in outer space; wouldn't they surely damage the spaceship?
|
I don't know about a spaceship, but the XKCD guy wrote an interesting article about what would happen if Earth was hit by a solid asteroid, travelling at various different speeds:
what-if question: Diamond
The ship would be a lot smaller than an asteroid, so I think the damage would be a lesser version of these descriptions. (The most extreme case described is the whole planet being vapourised and even neighbouring planets being affected by the radiation.)
As for the second part of the question - protecting the craft from minor collisions - I admit I have no idea how to work out the kinetic energy from collisions at such high speeds. But even a grain of sand would at least damage the front of the craft, and I guess some sort of thick shield would be needed to protect against erosion from interstellar dust...
Even at normal orbital speeds, Space Shuttle windows have had minor damage as this paper describes...
|
Given Earth's atmosphere, the added drag on the ship would make it nearly impossible to collide with the planet without losing momentum (if we're traveling at 99% of the speed of light here). Assuming the ship were somehow to maintain speed, the heat and drag caused by the atmosphere at this speed would tear any known design to pieces, causing it to burn up and be destroyed before colliding with the surface (any pieces that fall to earth would be traveling slower than the speed of light).
All of this aside, estimating gamma at ~7.1 for 99% of the speed of light, a collision should release ~5.47 x 10^17 joules (around 132 megatonnes) per kilogram. To give some perspective, Meteor Crater in Arizona was made by an impact of an estimated 10 megatonnes. This sounds like a huge amount of energy (it is), but a spaceship wouldn't have the mass needed to vaporize or even melt the earth. The blast that led to the extinction of the dinosaurs was estimated at ~100 million megatonnes. So if the rocket were around 1 x 10^6 kg, then a collision could hypothetically (in this unrealistic scenario) have a similar climatic impact.
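For reference, here is the per-kilogram arithmetic (note that $\gamma \approx 7.1$ corresponds to $0.99c$, while $\gamma \approx 70$ would correspond to $0.9999c$):

```python
from math import sqrt

c = 2.998e8          # speed of light, m/s
megaton = 4.184e15   # joules per megaton of TNT

for beta in (0.99, 0.9999):
    gamma = 1 / sqrt(1 - beta ** 2)
    ke_per_kg = (gamma - 1) * c ** 2   # relativistic kinetic energy per kg, J
    print(f"beta={beta}: gamma={gamma:.2f}, "
          f"{ke_per_kg:.3g} J/kg = {ke_per_kg / megaton:.0f} Mt/kg")
```

At $0.99c$ this reproduces the ~$5.47\times 10^{17}$ J (~132 megatonnes) per kilogram quoted above.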
|
HuggingFaceH4/pmp-stack-exchangedata/astronomy.stackexchange.com
|
What is the difference between mathematical modeling and statistical modeling?
I only know that a mathematical model is deterministic while a statistical model is stochastic.
Is that all to answer the question?
|
In my mind, statistical modelling is a special case of probabilistic modelling, which is a special case of mathematical modelling, but I don't usually bother to distinguish them, and I think the difference is often more one of culture. Things I associate more with statistical modelling are replication, the special role of intuition, and data exploration.
|
What is the difference between mathematics and statistics?
Statistics is itself a branch of mathematics, where most of the time we deal with the mean, median, mode, etc., although these require mathematical computation as well.
In the same way, in machine learning, statistical models have most of their computation related to means, medians, quantiles, etc. Examples: linear regression, logistic regression.
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
I have to decide which language type will result from the union of a type-2 (context-free) and a type-3 (regular) language.
Is there a way or rule to decide this for all language types?
|
I will share below my attempt to prove that you can express every regular language as a regular expression using just the $+$ and $@$ operators. Unfortunately, my attempt at a proof has a gap. Perhaps you can repair the gap. My attempt is based upon Brzozowski's method for generating regular expressions, so I'll assume you are already familiar with how that works.
First, some definitions and other preliminaries.
Definition. Define $A^+$ to be the language of one or more words from $A$, i.e., $A^+ = \{a_1 a_2 \cdots a_k \mid k \ge 1, a_1,\dots,a_k \in A\}$. In other words, $A^+ = A^* \setminus \{\epsilon\}$.
Definition. $\newcommand{\rp}{\backslash\backslash}$ For sets $A,B \subseteq \Sigma^*$, define $A\rp B$ by $A\rp B = (A^*\backslash B) - (A^+ \Sigma^*)$, where $A\backslash B$ is the left quotient , and $-$ represents the difference of two sets. In other words, $A\rp B$ is the smallest set $C \subseteq \Sigma^*$ such that $A^* B = A^* C$ and such that no word in $A^+$ is a prefix of any word in $C$.
Example: $\{01\}\rp \{0101010,101\} = \{0,101\}$.
Note that if $A,B$ are regular languages, then $A\rp B$ is also a regular language and can be computed effectively from $A,B$. (This follows from the standard closure properties of regular languages.)
Definition. The semantics of regular expressions using the $+$ and $@$ operators are defined inductively:
$L(w) = \{w\}$ if $w \in \Sigma^*$
$L(A_1+A_2+\dots+A_k) = L(A_1) \cup L(A_2) \cup \dots \cup L(A_k)$
$L(A^@ B) = \{ab : a \in L(A^*), b \in L(B), b \notin L(A^+ \Sigma^*)\}$
$L(w B) = \{wb : b \in L(B)\}$ if $w \in \Sigma^*$
$L(A^@) = L(A^*)$
$L((A_1+A_2)B) = L(A_1B + A_2 B)$, $L(A(B_1+B_2)) = L(AB_1 + AB_2)$
$L(A_1 A_2 \cdots A_k) = L(A_1 (A_2 \cdots A_k))$
(As a consequence, every regular expression using $+,@$ can be equivalently written as a sum of terms of the form $A_1 A_2 \cdots A_k$ where each $A_i$ is either of the form $w_i$ or $C_i^@$ for some $w_i \in \Sigma^*$ and some regular expression $C_i$ using $+,@$. Once it is rewritten in that sum-of-products form, the semantics of the resulting expression is given by the definition above.)
For instance, we have the following property: if there is no word in $L(A^+)$ that is a prefix of any word in $L(B)$, then $L(A^@ B) = L(A^* B)$.
In what follows, I won't try to distinguish between a regular expression $E$ and its corresponding language $L(E)$.
Now we can prove a generalization of Arden's lemma.
Lemma 1. Let $A,B \subseteq \Sigma^*$ be given, and assume $\epsilon \notin A$ and no word in $A^+$ is a prefix of any word in $B$. Suppose we have the equation $X=AX + B$ (i.e., $X=AX \cup B$). Then the least solution to this equation is $X=A^@ B$.
Proof. Arden's lemma says the least solution is $X=A^* B$. Now, since no word in $A^+$ is a prefix of any word in $B$, in fact $A^* B = A^@ B$, as there can be no ambiguity about how much the $*$ operator gobbles up.
Lemma 2. Let $A,B \subseteq \Sigma^*$ be given, and assume $\epsilon \notin A$. Suppose we have the equation $X=AX + B$ (i.e., $X=AX \cup B$). Then the least solution to this equation is $X=A^@ C$ where $C=(A\rp B)$.
Proof. Arden's lemma says the least solution is $X=A^* B$. As noted above, $A^* B = A^* C$. Moreover, no word in $A^+$ is a prefix of any word in $C$, so $A^* C = A^@ C$.
Now given any regular language $L$, we can apply Brzozowski's method to it, but using Lemma 2 above instead of Arden's lemma. It's tempting to hope that the result will be a regular expression for $L$ that uses only the operators $+$ and $@$, but unfortunately there's a gap: I don't know whether one can prove that $A\rp B$ can be represented as a regular expression using only the $+$ and $@$ operators. So, this proof method has a big gaping hole in it.
But perhaps someone will see a way to build on this and prove the desired result.
|
I believe I can prove that:
a RE using only possessive operators is equivalent to a RE without possessive operators.
any RE can be rewritten to an equivalent RE that uses only possessive operators.
For two expressions $A$ and $B$ to be equivalent means that they define the same language:
$$A\equiv B \iff L(A) = L(B)$$
Let's mark with " $\hat{\ }$ " the possessive operators.
I will use regular expressions extended with the $\cap$ and $\neg$ operators.
1. Possessive Alternation
$$ A\ \hat{|}\ B \equiv A \ |\ (B \cap \neg A) $$
This expresses the fact that if $A$ matches, $B$ is not even tried. In fact, if $A$ fails, the only elements of $B$ that can match are those that are not in $B\cap A$; otherwise $A$ would have matched.
Actually, it's easy to see that:
$$ A\ \hat{|}B\ \equiv A\ |\ B $$
because any string $s\in B$ will match $A\ |\ (B\cap \neg A)$, either because it is in $A\cap B$ (and hence in $A$) or because it is in $(B\cap \neg A)$. This means that it is not important to say whether $|$ is possessive or not.
2. Possessive Repetition
$$ A \hat{*} B \equiv A * (B \cap \neg A) $$
In words: " $A\hat{*}B$ matches a, possibly empty, sequence of $A$ followed by an element of $B$ which is not in $A$ ".
The reason is that $A\hat{*}$ will consume any possible element of $L(A)$, including those elements of $L(B)$ which are in $L(B\cap A)$.
It's easy to see that: $A\cap B = \emptyset \implies A\hat{*}B \equiv A * B$
and that: $B\subset A \implies A\hat{*}B = \emptyset $
The last expression explains why an expression like .@x can never match.
It is also clear that at the end of an expression $A\hat{*}\equiv A*$ .
As usual, $A\hat{+} \equiv A A\hat{*}$
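As a concrete aside, the consequence $B\subset A \implies A\hat{*}B = \emptyset$ can be demonstrated directly, assuming Python 3.11+ where the re module supports possessive quantifiers (written *+):

```python
import re

# a*+ is possessive: it swallows every 'a' and never backtracks, so no 'a'
# is left for the trailing literal and the pattern can never match.
print(re.fullmatch(r"a*+a", "aaaa"))   # None -- the same phenomenon as .@x
# The ordinary greedy star gives one 'a' back and succeeds:
print(re.fullmatch(r"a*a", "aaaa"))    # <re.Match object; span=(0, 4), ...>
```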
3. Non-possessive Repetition
We can also express the "greedy non-possessive repetition" using only possessive operators:
$$A * B = A\hat{*} B\ |\ ((A\cap \neg B)\;\hat{*}\;(B \cap A))\hat{+} $$
The first term of the alternation will match any sequence of $A$ followed by a $B$ that does not match $A$; the second will match the longest sequence of $A$ ending with a $B$ which is also in $A$ (otherwise it would have matched in the first alternative).
The non greedy version of $*$ (let's denote it with $*^?$ ) would be:
$$A {*^?} B = A\hat{*} B\ |\ (A\cap \neg B)\;\hat{*}\;(B \cap A) $$
4. Optional Match
Reasoning in the same way, we can see that:
$$
A \hat{?} B = A B \ |\ (B \cap \neg A) \\
A ? B = A \hat{?} B \ |\ (B \cap A)
$$
5. Conclusions
Considering points 1-4 above, and reasoning by induction, we can prove that we can define any possible regular language using only possessive operators. This answers my question.
6. Open points
Formality: I'm well aware that what I wrote above is not a formal proof. I'm sure the reasoning is correct, but a formal proof could highlight further details or additional hypotheses that are needed and that I might have missed. Also, the properties of the possessive operators should be formalized (are they associative, left distributive, ...?).
Practicality: Since I've used $\cap$ and $\neg$, it is left to determine whether it is practical to use only possessive operators. There might be some REs that would become extremely complex to write with only possessive operators.
Speed: Since there are no ambiguities, the only source of backtracking is the alternation operator. I'm pretty sure this means that the match can be done in linear time, but it should be proven.
|
HuggingFaceH4/pmp-stack-exchangedata/cs.stackexchange.com
|
This might be a question in general: due to computational burden, I have to use a subset of my complete data (say, 1,000 out of the complete 10,000 observations) to get the p-value of a test. The test itself is based on Monte Carlo simulations. My question is: is there a way to quantify the uncertainty of the p-value due to the use of the subset of 1,000 observations instead of the complete dataset? Thanks!
|
For the typical low signal:noise ratio we see in most problems, a common rule of thumb is that you need about 15 times as many events and 15 times as many non-events as there are parameters that you entertain putting into the model. The rationale for that "rule" is that it results in a model performance metric that is likely to be as good or as bad in new data as it appears to be in the training data. But you need 96 observations just to estimate the intercept so that the overall predicted risk is within a $\pm 0.1$ margin of error of the true risk with 0.95 confidence.
|
Having too many parameters compared to observations may lead to overfitting. Various adjustments or measures can be used to correct for this. AIC for example accounts for both the number of variables and the number of observations in your dataset and is probably most often used. AIC itself doesn't adjust the model, but serves as a tool to select the best model if you construct multiple ones. It's basically a tradeoff between residual error and model complexity.
You can furthermore take a look at other "information criteria" or more advanced techniques like cross-validation, penalized logistic regression (the "penalized" package in R), ...
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
For my research I am interested in the transmission characteristics between a transmitter (Tx) and a receiver (Rx) situated in a circular room. In particular, it is important for me to know the number of paths a ray can take such that it reflects exactly once off the walls of the room.
Since reflections occur such that the incident ray has the same angle relative to the normal as the reflected ray, I tried to use vectors to attack the problem but the math became very unwieldy.
Empirically, I have found that depending on the situation of the transmitter and receiver, there could be 2, 3, or 4 paths—no more, no less. There is an exceptional case where the transmitter and receiver are co-located at the centre, in which case there are infinitely many paths.
Can my experimental result be validated (or denied) analytically?
|
We use complex numbers to prove that there are at most $4$ such points unless both transmitter and receiver are at the center.
Identify the circular room with the unit circle $|z|=1$ in the complex plane,
and let $r$ and $t$ be the complex numbers corresponding to the
receiver and transmitter, with $|r|<1$ and $|t|<1$ .
If $z$ is a point of reflection then the condition $\theta_r = \theta_t$
comes down to $(z-r)(z-t)$ being a real multiple of $z^2$ ;
that is, to the ratio $(z-r)(z-t)/z^2$ being a real number.
Write
$$
(z-r)(z-t) / z^2 = 1 - (r+t) z^{-1} + rt z^{-2},
$$
and note that a complex number $w$ is real
if and only if it equals its own complex conjugate $\overline w$ .
Since $z$ is on the unit circle, $\overline z = z^{-1}$ ,
so our condition is
$$
\overline{rt} z^2 - (\overline r + \overline t) z
+ (r+t) z^{-1} - rt z^{-2} = 0.
$$
Multiplying by $z^2$ yields a polynomial of degree $4$ in $z$ .
Thus there are at most $4$ solutions, even without the condition $|z|=1$ ,
unless the polynomial vanishes identically, in which case every $z$
is a solution. But the polynomial vanishes identically
if and only if $r+t = rt = 0$ , which is to say $r=t=0$ ,
so we recover the degenerate case where receiver and transmitter
are both in the center of the circular room.
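As a numeric companion to this proof (not part of the original argument): numpy.roots solves the quartic directly, and we keep only the roots on the unit circle, since only $|z|=1$ corresponds to a point on the wall. The transmitter/receiver positions below are arbitrary examples.

```python
import numpy as np

def reflection_points(r, t, tol=1e-6):
    # Roots of conj(rt) z^4 - (conj(r) + conj(t)) z^3 + (r + t) z - rt = 0,
    # i.e. the degree-4 polynomial obtained above; filter to |z| = 1.
    coeffs = [np.conj(r * t), -(np.conj(r) + np.conj(t)), 0, r + t, -r * t]
    return [z for z in np.roots(coeffs) if abs(abs(z) - 1) < tol]

print(reflection_points(0.3 + 0.2j, -0.4 + 0.1j))   # at most 4 wall points
```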
|
Here is a plot of a grid of lines coming out of the transmitter centred at $(0,-0.5)$ in green, together with the first reflection lines, in yellow. There are 300 emission lines. From the picture you can see a big region where there appear to be two yellow lines through each point, then a significantly smaller region where there are four. The boundary of this region presumably is where all the triple intersections are, although I'm not certain what's happening at those three cusp points.
And here is a more dramatic image, with the emitter at $(0,-0.8)$ .
On closer inspection I think that boundary curve consists of points with three yellow lines intersecting, i.e. there's nothing unexpected going on here.
Regarding giving a detailed proof, I think there is a reasonable way to go about this. If we call the emission point $p$ , and the first impact-point on the boundary circle $q$ , then the 2nd impact-point on the boundary circle we will call $f(p,q)$ .
If $q$ has angle $\theta$ then $f(p,q)$ has angle $\theta + \pi + \delta_p(\theta)$. The nice thing about the function $\delta_p : \mathbb R \to \mathbb R$ is that it is $2\pi$-periodic and continuous. When $p=0$, $\delta_p$ is the zero function. When $p$ approaches a point on the boundary of the circle, $\delta_p$ approximates a sawtooth function -- a sawtooth function with period $2\pi$. Here are a few $\delta_p$ plots, below.
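For reference, here is a minimal sketch of how $\delta_p$ can be computed numerically (the reflection step uses the standard mirror formula $d' = d - 2(d\cdot\hat n)\hat n$ with outward normal $\hat n = q$ on the unit circle):

```python
import numpy as np

def delta_p(p, theta):
    """Angular defect delta_p(theta) for an emitter p inside the unit circle."""
    q = np.array([np.cos(theta), np.sin(theta)])  # first impact point
    d = (q - p) / np.linalg.norm(q - p)           # incoming ray direction
    d_ref = d - 2 * np.dot(d, q) * q              # reflect about the normal q
    # Second intersection of the ray q + s*d_ref with the circle: s = -2 q.d_ref
    f = q - 2 * np.dot(q, d_ref) * d_ref
    phi = np.arctan2(f[1], f[0])
    return (phi - theta) % (2 * np.pi) - np.pi    # wrap phi - theta - pi to (-pi, pi]

p = np.array([0.0, -0.5])
print([round(delta_p(p, th), 3) for th in np.linspace(0, 2 * np.pi, 8)])
# Sanity check: for p = (0, 0) every value is 0, as stated above.
```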
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
I just learned about Gibbs Sampling which is an MCMC method. Given a distribution $\pi$, we want to sample an item according to $\pi$.
Maybe my alternative suggestion will sound somewhat naive (even stupid), but why can't we just draw a number at random from $[0,M]$ for some sufficiently large $M$? Then, we divide the range into buckets with appropriate sizes according to the distribution.
This will be a true sampling of $\pi$.
One could argue that my suggestion demands a PRNG, but Gibbs Sampling uses randomness too when deciding the next state from the neighbors of the current state.
So for a reasonable distribution, wouldn't my suggestion work way better? It's essentially $O(1)$ and accurate.
|
The approach you described sounds like the common algorithms for sampling. If by reasonable distribution, you mean a smallish finite discrete distribution, then see the following references for how to do that. You would be right that Gibbs sampling would be a worse choice, probably, when these methods apply.
https://stats.stackexchange.com/questions/26858/how-to-generate-numbers-based-on-an-arbitrary-discrete-distribution
https://hips.seas.harvard.edu/blog/2013/03/03/the-alias-method-efficient-sampling-with-many-discrete-outcomes/
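For a smallish finite discrete distribution, the bucketing idea from the question is only a few lines; here is a sketch using a cumulative-sum table and binary search, O(log n) per draw (the alias method in the second link gets this down to O(1)):

```python
import numpy as np

probs = np.array([0.5, 0.2, 0.2, 0.1])  # any finite discrete distribution
cdf = np.cumsum(probs)                  # bucket boundaries in [0, 1]

rng = np.random.default_rng(0)
u = rng.random(100_000)                 # uniform draws
samples = np.searchsorted(cdf, u)       # map each draw to its bucket

print(np.bincount(samples) / len(samples))  # ~ [0.5, 0.2, 0.2, 0.1]
```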
For many problems it is computationally infeasible (or impossible in the case of infinite state spaces) to enumerate all possibilities, let alone choose a sufficiently large M. (The M you describe is actually the partition function, i.e., the sum of the un-normalized probabilities of all events.) For instance, imagine sampling from an Ising model, which has 2^N states, where N is the number of binary variables.
|
It's not "essentially $O(1)$" to draw objects from a set with non-uniform probability: your bucketing scheme takes more than constant time.
Further, sampling from a Markov chain allows you to sample without having to construct the probability distribution, or even the state space, explicitly. For example, how do you propose to randomly sample matchings in a graph by your method, even uniformly? You'd need to first construct the set of all matchings (of which there can be exponentially many), then select one. And note that constructing the set of matchings implies being able to count them, which is #P-hard, so very unlikely to be doable efficiently. With a Gibbs sampler, you just run a Markov chain for polynomially many steps and you have yourself a matching chosen (approximately) uniformly at random.
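To illustrate that last point, here is a minimal sketch of the classical add/remove chain on matchings (uniformity of the output holds only approximately, in the limit of many steps; the polynomial mixing-time bound is the nontrivial part):

```python
import random

def matching_chain(edges, steps, seed=0):
    """Run the add/remove Markov chain on matchings of a graph (edge list)."""
    rng = random.Random(seed)
    matching, used = set(), set()  # current matching and its matched vertices
    for _ in range(steps):
        u, v = e = rng.choice(edges)       # propose a uniformly random edge
        if e in matching:                  # remove it if present
            matching.remove(e)
            used -= {u, v}
        elif u not in used and v not in used:
            matching.add(e)                # add it if both endpoints are free
            used |= {u, v}
        # otherwise stay put (self-loop); the chain is symmetric, so its
        # stationary distribution is uniform over all matchings
    return matching

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle
print(matching_chain(edges, 10_000))       # an approximately uniform matching
```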
|
HuggingFaceH4/pmp-stack-exchangedata/cs.stackexchange.com
|
For one of my experimental setup I need to place a mirror perfectly parallel to a wall. It can be placed at any distance from the wall. I would like to use any method other than direct measurement. I am free to use the following:
a webcam
a secondry mirror
Edit: It's not necessary to use all or either of them.
|
Forget the webcam.
Attach the secondary mirror to the wall, at a height that is near the height of the center of the primary mirror.
Then adjust the horizontal and vertical tilt of the primary mirror to center the multiple images of the secondary mirror within each other on the primary mirror.
If you have a cheap laser pointer and a carpenter's square, you can set up the laser pointer so that it is exactly square to the wall (vertically and horizontally) and then adjust the primary mirror such that the laser beam goes back to the laser.
|
Fasten the secondary mirror to the wall, facing out, and support the webcam centered in front of it, looking into it. Turn on the camera. Rotate the webcam up and down, left and right, until its image of itself in the secondary mirror is centered. Put the other mirror behind the webcam: you will see what looks like a great many images of the camera if the mirrors are not parallel to each other. Try it: you should see right away what you have to do to make the mirrors parallel to each other (and to the wall).
You said the mirror can be any distance to the wall, so an alternative method is to set that distance to zero.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
The Moon has 27.3% of the Earth's radius/diameter but only 2% of the Earth's volume. I don't quite get it why the Moon's volume is that small despite having more than a quarter the 2-dimensional size of Earth. Compared to the Earth, does the Moon really have 2% of the Earth's 3-dimensional size? Why is it like this with the Moon and other bodies of similar diameters?
|
...despite having more than a quarter the 2-dimensional size of Earth.
I think herein lies the problem; diameter is a 1-dimensional measurement, its units are distance.
Let's rewrite 27.3% as 0.273. If that's the ratio of diameters, then the ratio of 2-dimensional areas should be $(0.273)^2$ and the ratio of the volumes should be $(0.273)^3$. Those numbers are 0.0745 and 0.0203 respectively.
So the next question is why, with a volume of 2% of Earth's, the Moon's mass is only 1.2% of Earth's!? That's because the Moon's average density is only 3.3 g/cm^3 compared to 5.5 g/cm^3 for Earth.
For more on that see answer(s) to Are there any known asteroids with average density similar to that of Earth's?
Visual cues are deceiving, these may help. Note that the larger beaker with the red fluid is filled to 600 ml, and the smaller one with blue fluid is only 60 ml, it looks like it's double in size but it's roughly 10x larger in volume!
Sources: above: https://en.wikipedia.org/wiki/File:Beakers.jpg and below: Why does the Moon appear gray when passing between the Sun and the Earth?
|
It is all just geometry and mathematics.
The volume of a sphere is calculated according to this formula:
Volume = $(4/3) \times \pi \times r^3$
where $\pi$ = 3.14159..., and $r$ is the radius of the sphere.
The Earth radius is 6,371 km.
The Moon radius is 1,737 km.
We put the numbers into the formula and we get:
• The volume of Earth is 1,083,206,916,845.7535 km³ (one trillion eighty-three billion bla-bla-bla cubic kilometers)
• The volume of Moon is 21,952,706,175.030006 km³ (twenty-one billion nine hundred and fifty-two million bla-bla-bla cubic kilometers, which is approximately twenty-two billion cubic kilometers)
Since one trillion is one thousand billion, you don't even need a calculator to understand that one trillion is roughly 50 times bigger than 21 billion, and one fiftieth (1/50) is exactly 2%.
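You can reproduce these numbers in a few lines (radii as quoted above):

```python
from math import pi

r_earth, r_moon = 6371.0, 1737.0  # km, as quoted above
volume = lambda r: 4 / 3 * pi * r**3

print(volume(r_earth))                   # ~1.083e12 km^3
print(volume(r_moon))                    # ~2.195e10 km^3
print(volume(r_moon) / volume(r_earth))  # ~0.0203, i.e. about 2%
```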
Here is a picture of Earth and Moon shown as balls lying close to each other. You can use a ruler and see that the picture is true to life, the diameter of the smaller ball is really 27.3% (about 1/4) of the diameter of the bigger one:
|
HuggingFaceH4/pmp-stack-exchangedata/astronomy.stackexchange.com
|
I came across this question in a review of an old exam I took. I didn't get the answer correctly then, and I'm struggling to figure the answer out now. Can anyone help me reason through this?
Prove or Disprove that if $F_X(z) > F_Y (z)$ for all $z\in \mathbb{R}$ then $P(X < Y ) > 0$ . We may not assume independence .
Here is what I attempted:
I figured I might be able to approach this by proving this through contradiction. I started by assuming $P(X<Y)=0$ . Then,
\begin{eqnarray*}
F_{X}(z)=P(X\le z) & = & P(X\le z,X<Y)+P(X\le z,X\ge Y)\\
& = & 0+P(X\le z,X\ge Y)
\end{eqnarray*}
Can anyone help from here?
|
Firstly, it is worth noting that the antecedent condition in your conjecture is a slightly stronger version of the condition for strict first-order stochastic dominance (FSD) $X \ll Y$ , so it implies this stochastic dominance relationship. This condition is much stronger than what you actually need to get the result in the conjecture, so I will give you a proof for a stronger result (same implication but with a weaker antecedent condition). Your chosen method of proof is a good one, and you are almost there - just one more step to go!
Theorem: If $F_X(z) > F_Y(z)$ for some $z \in \mathbb{R}$ then $\mathbb{P}(X<Y) > 0$ .
Proof: We will proceed using a proof-by-contradiction. Contrary to the result in the theorem, suppose that $\mathbb{P}(X<Y)=0$ . Then for all $z \in \mathbb{R}$ you have:
$$\begin{equation} \begin{aligned}
F_X(z) = \mathbb{P}(X \leqslant z)
&= \mathbb{P}(X \leqslant z, X < Y) + \mathbb{P}(Y \leqslant X \leqslant z) \\[6pt]
&= 0 + \mathbb{P}(Y \leqslant X \leqslant z) \\[6pt]
&= \mathbb{P}(Y \leqslant X \leqslant z) \\[6pt]
&\leqslant \mathbb{P}(Y \leqslant z) = F_Y(z), \\[6pt]
\end{aligned} \end{equation}$$
which contradicts the antecedent condition for the theorem. This establishes the theorem by contradiction. $\blacksquare$
|
Under the assumption that $X$ and $Y$ are independent and continuous,
\begin{align*}\Bbb P(X<Y)&=\Bbb E^Y[\Bbb P(X<Y\mid Y)]\\ &=\Bbb E^Y[F_X(Y)]\\&>\Bbb E^Y[F_Y(Y)]\\ &=\int_{\Bbb R} F_Y(y) \, \text{d}F_Y(y) \\&= \frac{1}{2} \int_{\Bbb R} \, \text{d}F_Y^2(y)\\&=\frac{1}{2}F_Y^2(\infty)-\frac{1}{2}F_Y^2(-\infty)\\&=1/2\end{align*}
Further,
$$\int_{\Bbb R} F_Y(y) \,\text{d}F_Y(y)=\int_{\Bbb R} \Bbb P(Y'<y) \,\text{d}F_Y(y)$$
when $Y'\sim F_Y(\cdot)$ , or
$$\int_{\Bbb R} F_Y(y) \,\text{d}F_Y(y)=\Bbb P(Y'<Y)$$
when $Y,Y'\stackrel{\text{iid}}{\sim} F_Y(\cdot)$ , implying
$$\int_{\Bbb R} F_Y(y) \,\text{d}F_Y(y)=1/2$$
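A quick Monte Carlo check of the result under independence (a sketch; $X\sim N(0,1)$ and $Y\sim N(1,1)$ are arbitrary choices satisfying $F_X(z)>F_Y(z)$ for all $z$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.normal(0.0, 1.0, n)  # F_X(z) = Phi(z)
y = rng.normal(1.0, 1.0, n)  # F_Y(z) = Phi(z - 1) < F_X(z) for all z

print(np.mean(x < y))  # ~0.76 > 1/2, as the derivation predicts
```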
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
Recently I found out about the Applications of Quantum Computing Professional Certificate Program that MITxPRO is offering for people interested in quantum computing. I saw that it consists of four courses that can be done independently or as a whole program. This is the link for the course.
I am especially interested in just the last one of these four courses, but I do not know if it would be necessary to take the other ones in order to do that course.
That's why I was wondering if someone here has started this course, and if so, whether they could give some insight into the level required for taking these courses, the time required to complete the homework, and their opinion of the course in general. It would also be interesting to hear if you think taking all the courses is necessary (although I am aware that just one of the courses has been given so far, so this would be a subjective opinion).
|
I signed up for this series because I was interested in the 2nd and 3rd courses.
There are a lot of students from different backgrounds, so I think that limits the depth of what the instructors can cover. The introductory course was too easy in terms of content, though useful for its industry perspectives and for getting to know 'who is doing what' in hardware. My fear is that the remaining courses will be a bit too simple/general.
The bulk of the time is spent watching videos. I set the speed to 1.25x or else it's just a bit too slow for me. You could complete the entire course in a weekend.
Taking all the courses is absolutely not necessary but you do get a nice certificate at the end.
Oct 31, 2018 Update
I've finished all 4 courses and have to say the 2nd, 3rd, and 4th courses were great. They went into a reasonable amount of depth in the topics. I'd recommend the series to anyone starting out. If you're already familiar with the basics then maybe skip the first course.
Jan 2, 2020 Update
Since I received a few upvotes on this answer recently, I thought I would add a bit more information. The 4 course certificate program has since been split into two two-course programs. Quantum Computing Fundamentals and Quantum Computing Realities. My comments above still stand. Skip the fundamentals course if you're already familiar with the basics.
|
A course with very similar content but a different name - Quantum Information Science I (three parts - part 1, 2, and 3) and Quantum Information Science II - was available for $49 per course as a verified certificate, a series of 3 courses + 1 more extra course on edX. Now the course has been taken down (at least no more new enrollments), and MITx Pro is offering it for \$2250 + \$2250 = \$4500. This is roughly 20 times the price for the same course.
I see there are a number of other MOOCs coming soon. One such is from St Petersburg State University - Introduction to Quantum Computing - but the English version is very poor in quality of content compared to the Russian version on Coursera. However, there is a course from TU Delft on edX, and also one from Keio University at FutureLearn. These options could be looked at: same content but at a lower price.
PS: I am coming up with a course on Udemy and also maybe on Coursera with IBM Qiskit open network free version. Stay tuned for it.
Edit-2: Adding edX course links cited above.
Quantum Information Science I
1.1) Quantum Information Science I, Part 1 https://www.edx.org/course/quantum-information-science-i-part-1
1.2) Quantum Information Science I, Part 2 https://www.edx.org/course/quantum-information-science-i-part-2
1.3) Quantum Information Science I, Part 3 https://www.edx.org/course/quantum-information-science-i-part-3
Quantum Information Science II: Advanced quantum algorithms and information theory
https://www.edx.org/course/quantum-information-science-ii-advanced-quantum-al
|
HuggingFaceH4/pmp-stack-exchangedata/quantumcomputing.stackexchange.com
|
I have a homework assignment question on accretions discs (essentially an estimation of the number of electron scatterings, but this is just for background).
There are a few parameters, one of them being $L$, which is the linear size of the medium (the medium in this case being an accretion disc around a blackhole)
Now, I have been given the mass of the black hole. Other than that, nothing else which could give me the linear size of the accretion disc.
Could I assume that the linear extent of the accretion disc is perhaps of the order of a few Schwarzschild radii? Which could be calculated from the mass, which is given.
If anyone could shed some light on this I would be very appreciative. I need a nudge in the right direction on this.
|
Millennia ago, when only a few thousand stars, a handful of planets and nebulae, and some transient objects like supernovae and comets were known, people usually named these objects after gods and heroes, but also after everyday objects. Stellar constellations that vaguely resembled something known were named after it. Mars is red, so it was named after the god of war; Mercury, being the fastest to complete one cycle around the Sun, was named after the god of messengers and travelers.
With the advent of telescopes, so many objects are now known that we had to come up with a more pragmatic convention, so usually an object is named according to its place in some catalogue, its type, and/or its coordinates on the sky. For instance, a quasar detected at the " J2000 " coordinates $\{\mathrm{RA},\mathrm{dec.}\} = \{22^\mathrm{h}\, 22^\mathrm{m}\, 56^\mathrm{s}, -09^\circ\, 46'\, 36''\}$ may be called J2222–0946. An x-ray source observed in the constellation Cygnus may be called Cyg X-1, while its binary counterpart is number 226,868 in the Henry Draper Extension Catalogue and is thus named "HDE 226868" (not to be confused with one of our site moderators).
Oftentimes, an object appears in multiple catalogues, and will thus have multiple names. For instance, a Lyman $\alpha$ -emitting galaxy appears in a catalogue of Ly $\alpha$ emitters, but due to its infrared properties, it will also appear in some catalogue over dusty galaxies (though Ly $\alpha$ emitters tend to contain little dust).
|
Names are only what we agree to call things. Astronomers have a practical need for names, and a romantic sense of discovery being confirmed by the naming. So Astronomers have cooperated to agree on names for astronomical objects.
In 1919 several astronomical associations merged to form the International Astronomical Union (IAU), and it is now the umbrella body to which professional astronomers and national scientific academies belong. As such, what the IAU calls things is as close to "official" names as we can get. It is the IAU who sets standards and guidelines for the naming of new asteroids, Trans-Neptunian Objects, craters on Pluto, exoplanets, etc.
The procedure for naming is roughly: Following discovery the IAU checks that the object really exists, and which astronomer actually discovered it first (sometimes several people report a new object at about the same time, the IAU sorts out priority). The IAU then asks the discoverer to propose a name. The name must be unique, and some objects are named systematically: for example moons of Pluto are named after gods from underworld mythology. The IAU checks this and then confirms the name, if everything is okay.
Stars aren't named by the IAU, with the exception of those that already had names (mostly Arabic) from ancient times. There are so many stars that naming them individually would be impractical, and they just get catalogue numbers. Most stars have many catalogue designations, each from a different catalogue.
Some objects do get named by general adoption by the media, for example "Tabby's star" has not been designated as such by the IAU, but if people continue to call KIC 8462852 by that name, it may enter general use.
|
HuggingFaceH4/pmp-stack-exchangedata/astronomy.stackexchange.com
|
I would like to calculate the agreement between 2 or more raters on all the judgments from a set, but also for each judgment.
For example:
rater1, rater2
aa, aa
bb, bb
cc, cd
I can use Cohen's Kappa or Fleiss Kappa to calculate the agreement for all the items, but can't find a way to calculate the agreement for each judgment. What R method should I use?
If I simply try to use Fleiss Kappa for the first row of the file ('aa', 'aa') I will get the following:
Cohen's Kappa for 2 Raters (Weights: unweighted)
Subjects = 1
Raters = 2
Kappa = NaN
z = NaN
p-value = NaN
I would really need a function to do this agreement for each judgment.
|
You can calculate the percent observed agreement for a single item:
$$
A=\sum_{k=1}^q \frac{r_k(r_k-1)}{r(r-1)}
$$
Where $q$ is the total number of categories, $r$ is the total number of raters that assigned the item to any category, and $r_k$ is the number of raters that assigned the item to category $k$.
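The question asks about R, but the formula is a one-liner in any language; here is a minimal Python sketch (the function name is mine):

```python
from collections import Counter

def item_agreement(ratings):
    """Percent observed agreement for one item: sum of r_k(r_k-1) over r(r-1)."""
    r = len(ratings)
    if r < 2:
        raise ValueError("need at least two raters")
    return sum(rk * (rk - 1) for rk in Counter(ratings).values()) / (r * (r - 1))

print(item_agreement(["aa", "aa"]))  # 1.0 -- both raters agree
print(item_agreement(["cc", "cd"]))  # 0.0 -- complete disagreement
```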
|
Instead of calculating the raters' agreement for each judgment, I calculated the raters' agreement for all the judgments that refer to the same subject, therefore I ended up with:
Fleiss Kappa for 2 raters
Subjects = 3
Raters = 2
There seems to be no Kappa or Fleiss Kappa that would give us the agreement for just 2 raters and 1 subject.
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
Are there known expressions for the total variation distance between $N(0,\sigma^2)$ and $N(0,\sigma^2+\epsilon)$ for small $\epsilon$? The only things I seem to find are expressions about changing the mean, not about changing the variance slightly.
|
There's also a softer argument based on properties of the heat kernel, which applies in higher dimensions as well, in Lemma 4.9 of this paper of Klartag . It shows that in $n$ dimensions, the total variation distance between centered Gaussian distributions with covariances $\alpha I_n$ and $\beta I_n$ is at most
$$
C \sqrt{n} \left|\frac{\beta}{\alpha} - 1\right|,
$$
where $C > 0$ is an absolute constant (which can be made explicit if you want).
|
For two measures with densities $f$ and $g$, the total variation distance is
$$
\int_{f>g}(f(x)-g(x))\,dx
$$
For two Gaussian measures with the same mean and different variances it is easy to identify the set $\{f>g\}$ and to obtain the formula for the distance in terms of the cumulative distribution function of the standard Gaussian measure
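In one dimension this computation can be carried out explicitly; here is a sketch (the crossing points where $f=g$ come from equating the two log-densities, and the integral is then expressed through the standard normal CDF $\Phi$):

```python
import numpy as np
from scipy.stats import norm

def tv_same_mean(sigma1, sigma2):
    """TV distance between N(0, sigma1^2) and N(0, sigma2^2)."""
    s1, s2 = sorted((sigma1, sigma2))
    # f = g exactly at +/- x0; f > g on |x| < x0 (s1 is the smaller sd).
    x0 = np.sqrt(2 * np.log(s2 / s1) / (1 / s1**2 - 1 / s2**2))
    return 2 * (norm.cdf(x0 / s1) - norm.cdf(x0 / s2))

for eps in (0.1, 0.01, 0.001):
    print(eps, tv_same_mean(1.0, np.sqrt(1.0 + eps)))  # roughly linear in eps
```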
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
In the famous Goldstein mechanics book, there is an example about a single (non-relativistic) particle of mass m and charge q moving in an E&M field.
It says the force on the charge can be derived from the following velocity-dependent potential energy
$$U=q\phi-q\mathbf{A}\cdot\mathbf{v} .\tag{1.62}$$
(eq 1.62 of 3rd ed.)
I can see where the expression came from using my E&M knowledge. So far it's OK. Next
$$L=T-V=\frac{1}{2}mv^2-q\phi+q\mathbf{A}\cdot\mathbf{v}.$$
(p.341) (It changed notation from $U$ to $V$ without mention.)
It says that because of the $q\mathbf{A}\cdot\mathbf{v}$ term in $V$, the Hamiltonian is not $T+V$. However, it says it's still the total energy, since the "potential" energy in an E&M field is determined by $\phi$ alone.
I'm confused by this sentence. Is it insisting that the potential energy is only $V=q\phi$? Then why did it introduce the velocity-dependent potential earlier?
What's the role of $q\mathbf{A}\cdot\mathbf{v}$ term?
|
Firstly, Goldstein uses the letters $V$ and $U$ for velocity-independent and velocity-dependent potentials, respectively, as explained in the beginning of Section 1.5.
Both the 2nd edition (p. 346) & the 3rd edition (p. 341) wrongly state that the Lagrangian for a point charge in an E&M field is
$$L~=~T-V$$
rather than
$$L~=~T-U. $$
It seems that Goldstein forgets his own notation convention from Section 1.5!
The 2nd edition states (p. 346)
Because of this linear term in $U$, the Hamiltonian is not $T+U$.
While the 3rd edition states (p. 342)
Because of this linear term in $V$, the Hamiltonian is not $T+V$.
The 2nd edition is here correct, while the 3rd edition is wrong, as the Hamiltonian $H$ is indeed the sum of the kinetic energy $T$ and the electric potential energy $V=q\phi$. It seems that the initial error in the 2nd edition caused a new error in the 3rd edition!
References:
H. Goldstein, Classical Mechanics, 2nd edition, p. 346.
H. Goldstein, Classical Mechanics, 3rd edition, p. 341-342.
|
I think Goldstein made a mistake (or is at least being misleading).
The Hamiltonian for a charged particle in an electromagnetic field is
$$
H=\frac{1}{2m}(\vec{p}-q\vec{A})^2+q\phi(\vec x)
$$
We also know, from the definition of the canonical momentum $\vec{p}=\partial L/\partial \dot{\vec{x}}$, that $\vec{p}=m\dot{\vec{x}}+q\vec{A}$. So in fact, the Hamiltonian is nothing more than
$$
H=\frac{1}{2}m\dot x^2+q\phi(\vec{x})
$$
written in terms of the canonical momentum $\vec{p}$. Thus, not only is the Hamiltonian the total energy of the particle, but it is in fact exactly $T+V$.
What I think Goldstein is referring to is that earlier, in chapter 1, he described the Lagrangian for a charged particle as arising from a "velocity dependent potential energy" $U$, from which he could write $L=T-U$. This $U$ is NOT a real potential energy, but it makes the Lagrangian work out. In terms of this $U$, he is saying that we cannot write $H=T+U$, where $U$ here is an artificial "velocity dependent potential energy." But we emphatically CAN write $H=T+V$, where $V$ is the boring regular potential energy of a particle in an electric field.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
Setting
Let $I\subseteq\mathbb C[x_0,\ldots,x_n]=:S$ be a homogeneous ideal and $X\subseteq\mathbb P^n$ the scheme defined by $I$. Consider the action of the symmetric group $\mathfrak S_{n+1}$ on $S$ by permuting the variables. Assume that $I$ is invariant under some subgroup $G\subseteq\mathfrak S_{n+1}$. Assume furthermore that $G$ acts transitively and freely on the irreducible components of $X$. In other words, all irreducible components of $X$ are isomorphic and we have one component for each permutation in $G$. You can obtain something like this by picking any (possibly open) point of $\mathbb P^n$ which has trivial stabilizer in $G$ and taking (the closure of) its $G$-orbit.
Question
I am in such a situation and I want to figure out whether each component of $X$ is nonsingular. I thought it might be a good idea to consider the quotient $X/G$, which is defined as $\operatorname{Proj}(S^G/I^G)$. Here, I denote by $S^G := \{ f\in S \mid G.f=\{f\}\}$ the $G$-invariants in $S$.
My question is whether the following is true:
The irreducible components of $X$ are nonsingular if and only if $X/G$ is nonsingular.
Thoughts so far
My intuition tells me that $X/G$ should be isomorphic to each component of $X$ (in the general case, also including its embedded points). If $X$ is normal, then this is easily true: Restricting the projection $\pi:X\twoheadrightarrow X/G$ to any component of $X$ yields a surjective morphism between normal varieties whose fibers generically contain one element, so this morphism is bijective. Because it maps between normal varieties, it is an isomorphism. I am not sure how to treat this in the general case, though.
I went through the examples $I=(x,yz)$ and $I=(x^2,xz,yx,yz)$ in $\Bbb C[x,y,z]$ with $G$ generated by the transposition of $y$ and $z$ only. It behaves as I expected, but I gained no insights.
|
Here is an example where $X/G$ is singular and the components of $X$ are smooth.
In $\mathbb{P}^3$ with coordinates $x,y,z,w$, consider the smooth conics
$$\begin{array}{lll}X_1:& x^2-y^2=yw,&z=w\\ X_2:& z^2-w^2=yw,&x=y.\end{array}$$
They are exchanged by the involution $\sigma:(x:y:z:w)\mapsto(z:w:x:y)$ and meet only at the points $(0:0:1:1)$ and $(1:1:0:0)$ which are themselves exchanged by $\sigma$. In particular, $\sigma$ acts freely on $X:=X_1\cup X_2$, so the projection $X\to X/\langle\sigma\rangle$ is étale. Hence $X/\langle\sigma\rangle$ is singular (because $X$ is; in fact $X/\langle\sigma\rangle$ is a rational curve with one node) while $X_1$ and $X_2$ are smooth.
|
You apparently mean the action on the set of components is free, not just faithful.
For $C$ a component of $X$, the composite map $C \to X \to X//G$ is a finite map, and (by the freeness assumption) birational. From there you need only that $X//G$ is normal to infer that this map is an isomorphism. (You don't need any assumption on $C$ nor do you need smoothness of $X//G$.)
I'm trying to involve the square $$\begin{matrix}
C \times G &\to& X \\ \downarrow &&\downarrow \\ C &\to& X//G\end{matrix}$$
where down-arrows divide by the $G$-action on the right (as indicated by the notation $X//G$) and right-arrows by the diagonal interior action $(c,g)\sim (ch, h^{-1}g)$. Your questions should be about comparing the two horizontal (birational) maps.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
I'm solving an exercise about the Lagrange-Euler equations, that states the following:
Let $\gamma (t) = \{ (t,q) : q = q(t), t_0 \leq t \leq t_1\}$ be a curve in $\mathbb{R} \times \mathbb{R}^2$ . Further let $F(q,\dot{q},t)$ be the function from $\mathbb{R}^2 \times \mathbb{R}^2 \times \mathbb{R} \rightarrow \mathbb{R}$ for which the functional $\Phi = \int_{t_0}^{t_1} F(q,\dot{q},t) dt$ is the length of the curve.
(a) Which is the form of $\Phi$ in cartesian coordinates? Which is its form in polar coordinates?
(b) Give the Euler-Lagrange equations in both coordinate systems.
(c) Solve the differential equations in both coordinate systems and show that the solutions are the same.
Now, my problem begins with giving the form of $\Phi$ . I found that the element of length in cartesian coordinates is $ds = \sqrt{dx^2 + dy^2}$ , so with
$$\int ds = \int \frac{ds}{dt} dt = \int \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2} dt,$$
We find that $\Phi = \int_{t_0}^{t_1} ||\dot{\gamma}(t)|| dt$ .
Now, my plan is finding the element of length in polar coordinates, and plugging in the respective expressions in terms of $\gamma$ . The problem is that I don't see how to find the element of length in polar coordinates. I looked it up on Wikipedia, and found $ds^2 = dr^2 + r^2 d\theta^2$ . Now, for $dr^2$ I would plug in $||\dot{\gamma} (t)||^2$ , for $r^2$ I'd set $||\gamma (t)||^2$ , and for $d\theta$ I have no idea.
Can you help me, especially with the derivation of the polar line element and the form of $\Phi$ in polar coordinates?
|
Set $x = r \cos \theta$, $y = r \sin \theta$. Taking the total differentials, $$\mbox{d}x = \mbox{d}r \cos \theta - r \sin \theta \mbox{d} \theta,$$
$$\mbox{d}y = \mbox{d}r \sin \theta + r \cos \theta \mbox{d} \theta.$$
Squaring and simplifyng
$$\mbox{d}s^2 = {\mbox{d}x}^2 + {\mbox{d}y}^2 = {\mbox{d}r}^2 + r^2 {\mbox{d}\theta}^2.$$
Hence $$\frac{\mbox{d}s}{\mbox{d} t} \mbox{d} t = \sqrt{ \left( \frac{\mbox{d}r}{\mbox{d}t}\right)^2 + r^2 \left( \frac{\mbox{d}\theta}{\mbox{d}t}\right)^2 }\, \mbox{d}t.$$
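A quick symbolic check of this computation (a sketch using sympy, treating the differentials as formal symbols):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
dr, dtheta = sp.symbols('dr dtheta')

dx = dr * sp.cos(theta) - r * sp.sin(theta) * dtheta
dy = dr * sp.sin(theta) + r * sp.cos(theta) * dtheta

# The cross terms cancel, leaving the polar line element.
print(sp.simplify(sp.expand(dx**2 + dy**2)))  # dr**2 + dtheta**2*r**2
```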
Now, the property of being extremal is a characteristic of the curve, not of the coordinate system, so it is independent of the local chart you choose. In particular, the Euler-Lagrange equations retain the same form in both systems (obviously, one changes the labels: $(x,y) \to (r,\theta)$). These remarks answer points $(a)$ and $(b)$. Point $(c)$ is a simple verification you can do after having inverted the previous relations between $(x,y)$ and $(r,\theta)$.
For a brilliant discussion of this and more subtle points, see Arnold, Mathematical methods of classical mechanics, Paragraphs 12.C, 12.D.
|
For the polar coordinates expression, simply "divide" the line element in polar coordinates by $dt^2$ to obtain
\begin{align}
\left(\frac{ds}{dt}\right)^2 = \left(\frac{dr}{dt}\right)^2 + r^2\left(\frac{d\theta}{dt}\right)^2
\end{align}
so in polar coordinates, one has
\begin{align}
\|\dot\gamma(t)\| = \sqrt{\dot r^2 + r^2\dot\theta^2}
\end{align}
and I'll leave the rest to you.
Note. The more rigorous way to do this is to note that in any given coordinates, the Euclidean metric of the plane can be written as a $2\times 2$ matrix with elements $g_{ij}$. The speed is then given by the following expression in terms of the metric components in these coordinates:
\begin{align}
\|\dot \gamma(t)\| = \sqrt{g_{ij}(\gamma(t))\dot x^i(t)\dot x^j(t)}
\end{align}
where we have written the curve in components in the given coordinates as $\gamma(t) = (x^i(t))$ . The manipulations with the line element performed above are equivalent to this. I'll leave it to you to show that the expression given in terms of the metric components gives the same result for the speed.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I studied the Higgs mechanism a couple of times now and one question that always comes to my mind is the imaginary part of the mass in the Higgs potential.
The Higgs potential can be written as $$V = -\mu^2 \lvert\phi\rvert^2 + \lambda \lvert \phi \rvert^4$$ where the $\mu$ term is identified as mass term. Plugged in in the Lagrangian ${\cal L}=\ldots -V$ , one can obtain spontaneous symmetry breaking for $\mu^2<0$ . My question now is, how should I interpret a imaginary mass term in a physical way?
|
The equation states that the transverse applied force at one point in the string is equal to the transverse force at that point expressed in terms of the string tension. It is not stating that there is no net force on a portion of string. If it did say the latter, then it really would contradict the string being able to accelerate.
|
After thinking for some time on this, I conclude that my reasoning is wrong: I cannot treat the end of the string as a particle with a mass with a finite force exerted on it.
Since it's an infinitesimal element with mass $\rho\, dx$, any non-vanishing finite force would cause an infinite acceleration. So we must ensure that the finite forces vanish.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I work with a unbalanced data set (it is about people who actually bought stuff):
Bought stuff: Yes ~ 3%
Bought stuff: NO ~97%
The most important task for my machine learning model, is to optimize the sensitivity (I want to "catch" all the "Yes" people, the 3%).
But I was wondering how I could define the baseline. I read this article ( https://machinelearningmastery.com/how-to-get-baseline-results-and-why-they-matter/ ) where it is written: "Classification: select the class that has the most observations and use that class as the result for all predictions".
But, because sensitivity is the most important, can I say that my baseline is 3% (the Yes class, because when you guess at random, you will statistically label 3 people out of 100 as buyers)?
|
The most important task for my machine learning model, is to optimize
the sensitivity (I want to "catch" all the "Yes" people, the 3%).
Taking this sentence literally, the baseline (guess "Yes" for everybody) is the best possible method - this will get 100% sensitivity and there is no way to improve. Obviously you want to also get good specificity without compromising sensitivity but how much of a compromise is still worth it? (e.g. will you be willing to reduce sensitivity to 95% to get 100% specificity?) There is no single good answer, it really depends on your case and sensitivity alone is impossible to interpret.
Your question IMHO illustrates a wider problem with thinking in terms of sensitivity and specificity. I would suggest that you define a cost function - what is the cost/utility of true positives, true negatives, false positives and false negatives (those all can have dramatically different costs!) and then try to find a classifier that minimizes expected cost (maximizes expected utility).
You can then compare which of the baseline classifiers (in your case just giving the same answer for all inputs) has lower expected cost and use this one.
Frank Harrell has some more thoughts on this topic: http://www.fharrell.com/post/classification/
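A minimal sketch of that cost-based threshold selection (all four unit costs and the synthetic data are hypothetical placeholders; in practice the predicted risks would come from your fitted model):

```python
import numpy as np

COST_FP, COST_FN, COST_TP, COST_TN = 1.0, 10.0, 0.0, 0.0  # hypothetical costs

def expected_cost(y, p_hat, threshold):
    pred = p_hat >= threshold
    fp = np.sum(pred & (y == 0))
    fn = np.sum(~pred & (y == 1))
    tp = np.sum(pred & (y == 1))
    tn = np.sum(~pred & (y == 0))
    return (COST_FP * fp + COST_FN * fn + COST_TP * tp + COST_TN * tn) / len(y)

rng = np.random.default_rng(0)
y = (rng.random(10_000) < 0.03).astype(int)              # ~3% buyers
p_hat = np.clip(0.03 + 0.4 * y + 0.05 * rng.normal(size=10_000), 0, 1)

best = min(np.linspace(0.01, 0.99, 99), key=lambda t: expected_cost(y, p_hat, t))
print(best, expected_cost(y, p_hat, best))
```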
|
Generally, to build a baseline, I take a look at what my data is - especially the fraction of classes, how many classes there are, and finally how many features. In your case, saying "Yes" to everything increases the sensitivity, but I am assuming that's not what you want to do, for obvious reasons.
I will share my rule of thumb for building a baseline in your case (or rather, what I know about your case):
It's a binary classification problem: the first thing that comes to mind is SVMs - as has been said by others, SVMs give you an optimal solution whereas most other approaches will give you a good-enough solution. In machine learning terms, SVMs have low variance.
Since, your classes are way imbalanced, I would suggest using appropriate class weights - for SVMs or whatever classifiers you choose.
I am assuming you have a lot of features. If you don't, ignore this step. If you do, reduce these features by random forest feature ranking or some other supervised feature reduction technique. I would advise against PCA since it is unsupervised, and you may end up getting undesirable results because of the high class imbalance.
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
While my research about the untreated surface of moulded cast I read that the properties such as tensile strength, 0,2% proof stress and so on are different (much smaller) from the properties you can find in the tables. That’s because the samples have a treated surface to evaluate the material properties.
Now my question is how can I determine the thickness of the surface where my properties are much smaller?
The background is that I have a bore near the untreated surface loaded with hydraulic pressure and for safety I need a specific thickness of material around the bore. And because the material properties of the untreated surface are much smaller, I will not take this area into account to my calculations. If it is needed: the regarded cast is GJS 400.
Many thanks in advance!
|
As briefly mentioned above, you need to either connect the caliper to one of the software "masters" offered by Sylvac, or use the caliper in HId mode .
Select your model from this page: Sylvac Hand Tools .
Download the appropriate manual ; the other answers here do not refer to your appropriate model manual.
Enable the HId mode through a sequence that might look like this below. Please note that some calipers don't have HId mode.
|
Not sure if this is the sole cause for this (no.Data) error but. . .
Setting the Bluetooth profile to (HId) - (Virtual Keyboard) seems to have corrected this issue. Measurement data arrives (in the app that has focus / cursor) as if the caliper were a keyboard attached to the PC. Very smooth and nice, with a carriage return - linefeed appended.
|
HuggingFaceH4/pmp-stack-exchangedata/engineering.stackexchange.com
|
In three dimensions, the Dirac delta function $\delta^3 (\textbf{r}) = \delta(x) \delta(y) \delta(z)$ is defined by the volume integral:
$$\int_{\text{all space}} \delta^3 (\textbf{r}) \, dV = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \delta(x) \delta(y) \delta(z) \, dx \, dy \, dz = 1$$
where
$$\delta(x) = 0 \text{ if } x \neq 0$$
and
$$\delta(x) = \infty \text{ if } x = 0$$
and similarly for $\delta(y)$ and $\delta(z)$ .
Does this mean that $\delta^3 (\textbf{r})$ has dimensions of reciprocal volume?
As an example, a textbook that I am reading states:
For a collection of $N$ point charges we can define a charge density
$$\rho(\textbf{r}) = \sum_{i=1}^N q_i \delta(\textbf{r} - \textbf{r}_i)$$
where $\textbf{r}_i$ and $q_i$ are the position and charge of particle $i$ , respectively.
Typically, I would think of charge density as having units of charge per volume in three dimensions: $(\text{volume})^{-1}$ . For example, I would think that units of $\frac{\text{C}}{\text{m}^3}$ might be possible SI units of charge density. If my assumption is true, then $\delta^3 (\textbf{r})$ must have units of $(\text{volume})^{-1}$ , like $\text{m}^{-3}$ for example. Is this correct?
|
Yes. The Dirac delta always has the inverse dimension of its argument. You can read this from its definition, your first equation. So in one dimension $\delta(x)$ has dimensions of inverse length, in three spatial dimensions $\delta^{(3)}(\vec x)$ (sometimes simply written $\delta(\vec x)$) has dimension of inverse volume, and in $n$ dimensions of momentum $\delta^{(n)}(\vec p)$ has dimensions of inverse momentum to the power of $n$.
|
Using the property $\delta (ax)=\frac{1}{|a|}\delta (x)$ (with $x$ dimensionless and $a$ carrying the dimension), we see that indeed the dimension of a Dirac delta is the dimension of the inverse of its argument.
One recurring example is $\delta(p'-p)$, where $p$ denotes momentum; this delta has dimensions of inverse mass in natural units.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I've been trying to figure out how to calculate the latitude of my location with just the shadow of a stick cast by the sun. I am doing this under the assumption that I am stranded in some place with no almanac or any other data – not even day or month.
I want to find the latitude roughly. But I have problems picturing the globe along with its tilted axis and the angles formed by the shadow.
Could you help me understanding the concept and give me a definitive picture of angles that can be calculated. A perspective like a side-view of the earth and sun (orbiting plane – perpendicular to the screen) would help me understand much better.
Here is my efforts to understand so far:
I am not an expert, nor do I have a good background in maths (my maths teacher ruined my life). So please explain it to me in simple words.
|
I've been trying to figure out how to calculate the latitude of my
location with just the shadow of a stick cast by the sun. I am doing
this under the assumption that I am stranded in some place with no
almanac or any other data. (not even day or month).
You really cannot find your latitude with only the sun and sticks at your disposal and no declination or time data. Maybe if you could record your shadow over a period of a day, you could tell whether you are in the northern or southern hemisphere by observing which way the shadow moves. If it moves clockwise, you are in the northern hemisphere.
And if you could repeat and observe for 365 days (from the day you are stranded, assuming you measure the angles very accurately), you could find out whether you are within the Tropic of Cancer or above it, as you might not have the sun right on top of your stick at any point in the year if you are above the 23.5 N parallel.
And that's as far as you can get. To determine the precise location, you will need the sun's declination at any given time. Maybe you will get lucky, if the day you are measuring the angle happens to coincide with an equinox, but you wouldn't know it.
Could you help me understanding the concept and give me a definitive
picture of angles that can be calculated.
A picture is worth thousand words. And an interactive demonstration ... It's worth at least a hundred 2D diagrams.
Additional Interactive media - http://astro.unl.edu/naap/motion1/animations/seasons_ecliptic.html
|
Assuming you aren't north of the Arctic Circle or south of the Antarctic Circle, you can determine your latitude by making observations throughout the course of a day, and over the course of a year. You'll need
A rather straight stick,
A fairly flat piece of ground,
A plumb bob (which you can make out of string and a rock),
Some small pebbles to mark the tip of the shadow of the stick over the course of a year at solar noon, and
A trigonometry table or a calculator.
Use your plumb bob to ensure your stick is as close to vertical as you can make it. You'll want to place the stick in exactly the same place every day.
The first thing you'll want to find is the north-south line that passes through the base of the stick. To do this, place a pebble at the tip of the stick's shadow when the shadow is at its shortest. If you do this perfectly, you'll have the north-south line on day number one. You almost certainly won't do this perfectly, so you'll need to repeat this for a few days. Because the Sun rises more or less in the east and sets in the west, you also know which way is north and which is south. You have a compass.
You'll also have a very rough idea of your latitude. If the Sun sets to the left of the north-south line, you know you are somewhere north of 23.44 south latitude. If it sets to the right, you are somewhere south of 23.44 north latitude.
If you want a better idea of your latitude, keep doing this until the day you see the Sun rise exactly in the East. This happens twice a year, typically on March 20 and September 23. Use a string to measure the length of the stick's shadow when the tip of the shadow crosses the north-south line. You do not need a ruler; all you need to know is the ratio of the shadow's length to the stick's length. Now it's a matter of trigonometry:
$$\phi = \arctan\left(\frac {\text{shadow length}}{\text{stick length}}\right)$$
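For example, if your one-metre stick casts a 0.7-metre shadow at solar noon on an equinox (hypothetical numbers):

```python
from math import atan, degrees

stick, shadow = 1.0, 0.7  # metres (hypothetical measurements)
print(degrees(atan(shadow / stick)))  # ~35 degrees; north or south per the sunset test
```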
|
HuggingFaceH4/pmp-stack-exchangedata/astronomy.stackexchange.com
|
In trying to compute a discrete probability of some event $E$, call it $P(E)$, one typically takes $P(E) = n(E) / n(S)$, where $n(E)$ is the number in the event, and $n(S)$ is the number in the sample space. (I may be a bit loose with terminology here, but hopefully everyone will understand what I'm trying to express!)
Anyhow, my question is how does $P(E) = n(E) / n(S)$ generalize to the case of a conditional probability, something like $P(E|F)$?
Thanks!
|
Interpretation of deep models is still challenging.
Your post only mentions CNNs for computer vision applications, but (deep or shallow) feed-forward networks and recurrent networks remain challenging to understand.
Even in the case of CNNs which have obvious "feature detector" structures, such as edges and orientation of pixel patches, it's not completely obvious how these lower-level features are aggregated upwards, or what, precisely, is going on when these vision features are aggregated in a fully-connected layer.
Adversarial examples show how interpretation of the network is difficult. An adversarial example has some tiny modification made to it, but results in a dramatic shift in the decision made by the model. In the context of image classification, a tiny amount of noise added to an image can change an image of a lizard to have a highly confident classification as another animal, like a (species of) dog.
This is related to interpretability in the sense that there is a strong, unpredictable relationship between the (small) amount of noise and the (large) shift in the classification decision. Thinking about how these networks operate, it makes some sense: computations at previous layers are propagated forward, so that a number of errors -- small, unimportant errors to a human -- are magnified and accumulate as more and more computations are performed using the "corrupted" inputs.
On the other hand, the existence of adversarial examples shows that the interpretation of any node as a particular feature or class is difficult, since the fact that the node is activated might have little to do with the actual content of the original image, and that this relationship is not really predictable in terms of the original image. But in the example images below, no humans are deceived about the content of the images: you wouldn't confuse the flag pole for a dog. How can we interpret these decisions, either in aggregate (a small noise pattern "transmutes" a lizard into dog, or a flagpole into a dog) or in smaller pieces (that several feature detectors are more sensitive to the noise pattern than the actual image content)?
HAAM is a promising new method to generate adversarial images using harmonic functions. ("Harmonic Adversarial Attack Method" Wen Heng, Shuchang Zhou, Tingting Jiang.) Images generated using this method can be used to emulate lighting/shadow effects and are generally even more challenging for humans to detect as having been altered.
As an example, see this image, taken from " Universal adversarial perturbations ", by
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. I chose this image just because it was one of the first adversarial images I came across. This image establishes that a particular noise pattern has a strange effect on the image classification decision, specifically that you can make a small modification to an input image and make the classifier think the result is a dog. Note that the underlying, original image is still obvious: in all cases, a human would not be confused into thinking that any of the non-dog images are dogs.
Here's a second example from a more canonical paper, " EXPLAINING AND HARNESSING ADVERSARIAL EXAMPLES " by Ian J. Goodfellow, Jonathon Shlens & Christian Szegedy. The added noise is completely indistinguishable in the resulting image, yet the result is very confidently classified as the wrong result, a gibbon instead of a panda. In this case, at least, there is at least a passing similarity between the two classes, since gibbons and pandas are at least somewhat biologically and aesthetically similar in the broadest sense.
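The attack in that paper, the fast gradient sign method (FGSM), is essentially a one-liner; here is a minimal PyTorch sketch (the linear model and random data are throwaway placeholders):

```python
import torch

def fgsm(model, x, y, eps):
    """FGSM of Goodfellow et al.: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = torch.nn.Linear(784, 10)  # placeholder "classifier"
x = torch.rand(8, 784)            # placeholder batch of flattened images
y = torch.randint(0, 10, (8,))
x_adv = fgsm(model, x, y, eps=0.1)
print((x_adv - x).abs().max())    # perturbation bounded by eps
```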
This third example is taken from " Generalizable Adversarial Examples Detection Based on Bi-model Decision Mismatch " by João Monteiro, Zahid Akhtar and Tiago H. Falk. It establishes that the noise pattern can be indistinguishable to a human yet still confuse the classifier.
For reference, a mudpuppy is a dark-colored animal with four limbs and a tail, so it does not really have much resemblance to a goldfish.
I just found this paper today. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus. " Intriguing properties of neural networks ". The abstract includes this intriguing quotation:
First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
So, rather than having 'feature detectors' at the higher levels, the nodes merely represent coordinates in a feature space which the network uses to model the data.
|
The subject of my Ph.D dissertation was to reveal the black-box properties of neural networks, specifically feed-forward neural networks, with one or two hidden layers.
I will take up the challenge to explain to everyone what the weights and bias terms mean, in a one-layer feed-forward neural network. Two different perspectives will be addressed: a parametric one and a probabilistic one.
In the following, I assume that the input values provided to each input neuron have all been normalized to the interval (0,1), by linear scaling ($x_{input}=\alpha \cdot x + \beta$), where the two coefficients $\alpha$ and $\beta$ are chosen per input variable, such that $x_{input} \in (0,1)$. I make a distinction between real-numbered variables and enumerated variables (with a boolean variable as a special case of an enumerated variable):
A real-numbered variable is provided as a decimal number between $0$ and $1$, after linear scaling.
An enumerated variable, for example the days of the week (Monday, Tuesday, etc.), is represented by $v$ input nodes, with $v$ being the number of enumerable outcomes, i.e. $7$ for the number of days in a week.
Such a representation of your input data is required in order to be able to interpret the (absolute value) size of the weights in the input layer.
Parametric meaning:
the larger the absolute value of the weight between an input neuron and a hidden neuron, the more important that variable is for the 'firing' of that particular hidden node. Weights close to $0$ indicate that an input value is as good as irrelevant.
the weight from a hidden node to an output node indicates how strongly the weighted combination of input variables most amplified (in the absolute sense) by that hidden neuron promotes or dampens that particular output node. The sign of the weight indicates promotion (positive) or inhibition (negative).
the third part not explicitly represented in the parameters of the neural network is the multivariate distribution of the input variables. That is, how often does it occur that the value $1$ is provided to input node $3$ - the one with the really large weight to hidden node $2$?
a bias term is just a translation constant that shifts the average of a hidden (or output) neuron. It acts like the shift $\beta$, presented above.
Reasoning back from an output neuron: which hidden neurons have the highest absolute weight values on their connections to the output neurons? How often does the activation of each hidden node come close to $1$ (assuming sigmoid activation functions)? I'm talking about frequencies, measured over the training set. To be precise: what is the frequency with which the hidden nodes $i$ and $l$, with large weights to the input variables $t$ and $s$, are close to $1$? Each hidden node propagates a weighted average of its input values, by definition. Which input variables does each hidden node primarily promote - or inhibit? Also the quantity $\Delta_{j,k}=\mid w_{i,j} - w_{i,k}\mid$ explains much: the absolute difference between the weights that fan out from hidden node $i$ to the two output nodes $j$ and $k$.
For the hidden nodes that are most important for an output node (talking in frequencies, over the training set): which 'input weights times input frequencies' are most important? Answering this is how we close in on the significance of the parameters of feed-forward neural networks.
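Here is a minimal sketch of this kind of weight inspection, using a small scikit-learn network on synthetic data scaled to $(0,1)$ as assumed above (the crude importance score is my own illustration, not a standard method):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 4))                          # inputs already in (0, 1)
y = (X[:, 0] + 0.1 * X[:, 1] > 0.6).astype(int)   # feature 0 dominates

net = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                    random_state=0).fit(X, y)

W_in, W_out = net.coefs_                          # input->hidden, hidden->output
# Crude importance: |input weight| times |hidden->output weight|, summed.
importance = (np.abs(W_in) @ np.abs(W_out)).ravel()
print(importance)                                 # feature 0 should dominate
```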
Probabilistic interpretation:
The probabilistic perspective means regarding a classification neural network as a Bayes classifier (the optimal classifier, with the theoretically lowest error rate). Which input variables have influence on the outcome of the neural network - and how often? Regard this as a probabilistic sensitivity analysis.
How often can varying one input variable lead to a different classification? How often does input neuron $x_{input}$ have potential influence on which classification outcome becomes the most likely, implying that the corresponding output neuron achieves the highest value?
Individual case - pattern
When varying a real-numbered input neuron $x_{input}$ can cause the most likely classification to change, we say that this variable has potential influence. When varying the outcome of an enumerated variable (changing weekday from Monday $[1,0,0,0,0,0,0]$ to Tuesday $[0,1,0,0,0,0,0]$, or any other weekday) changes the most likely outcome, then that enumerated variable has potential influence on the outcome of the classification.
When we now take the likelihood of that change into account, we talk about expected influence. What is the probability of observing a change in input variable $x_{input}$ such that the input case changes outcome, given the values of all the other inputs? Expected influence refers to the expected value of $x_{input}$, namely $E(x_{input} \mid {\bf x}_{-input})$. Here ${\bf x}_{-input}$ is the vector of all input values, except input $x_{input}$. Keep in mind that an enumerated variable is represented by a number of input neurons. These possible outcomes are here regarded as one variable.
Deep learning - and the meaning of the NN parameters
When applied to computer vision, neural networks have shown remarkable progress in the last decade. The convolutional neural networks introduced by LeCun in 1989 have turned out to perform really well in terms of image recognition. It has been reported that they can outperform most other computer-based recognition approaches.
Interesting emergent properties appear when convolutional neural networks are being trained for object recognition. The first layer of hidden nodes represents low-level feature detectors, similar to the scale-space operators of T. Lindeberg, Feature Detection with Automatic Scale Selection, 1998. These scale-space operators detect
lines,
corners,
T-junctions
and some other basic image features.
Even more interesting is the fact that perceptual neurons in mammal brains have been shown to resemble this way of working in the first steps of (biological) image processing. So with CNNs, the scientific community is closing in on what makes human perception so phenomenal. This makes it very worthwhile to pursue this line of research further.
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
I came up with a question stated below
" A vessel contains oil (density 0.8g/cc) over mercury (density 13.6g/cc). A homogeneous sphere floats with half its volume immersed in mercury and the other half in oil. The density of the material of the sphere in g/cc is ? "
The answer to this problem is 7.2 g/cc but this seems incorrect to me. I got 6.4 g/cc
If we consider the oil section only, then the large forces at the surface of the lower part of the hemisphere cancel out (think in 3D), while the small forces which act on the upper surface have a downward vertical component (the horizontal components cancel out), so there is a net downward force on the sphere from the oil (buoyant force).
The sphere is homogeneous, and the oil and mercury have constant density, so don't assume anything out of the box.
Regards
Edit :-
As suggested by BIO's answer, let's put a small, negligible rod between the two hemispheres under consideration. Then, due to that flat surface, there is a greater force on the flat surface which compensates the other small forces acting on the curved surface, leading to a net upward force from the oil. But in reality we can't assume that, as that flat surface isn't exposed to the oil section (only the curved surface is).
Excuse the drawing.
I still can't figure it out; please help.
|
Archimedes' principle states that the buoyant force equals the weight of the fluid displaced. This gives the solution of 7.2 g/cc. If you're not convinced, consider the following set-up
The two hemispheres are only separated slightly but joined rigidly by a short, thin rod so that the pressures at the two flat surfaces are infinitesimally different. You should feel reassured to use Archimedes' principle now.
Regarding your original question, yes buoyant force can act downwards (as you have drawn in your diagram). The buoyant force is due to the liquid pushing the surface of the object. As pressure is higher at the bottom of the liquid due to gravity, usually the net buoyant force is pointing upwards. (A suction cup is a counterexample) However, if we just consider a part of the object's surface, the buoyant force acting on that patch is normal to the surface: $d\vec{F}_\textrm{buoy} = -p d\vec{A}$ .
Your above reasoning suggests that the density of the sphere is equal to half the difference of the density of the two media. What happens when both fluids are the same?
Lastly, you can always calculate the force by direct integration, if you are familiar with vector calculus. It would be good to use spherical coordinates for this problem. Proof of Archimedes' principle uses the Gauss' theorem and can be found in this answer .
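For instance, here is a minimal numeric check of the direct-integration approach in Python (the sphere radius and the coordinate setup are my own assumptions; gauge pressure is taken to be zero at the oil-mercury interface):

    import numpy as np
    from scipy.integrate import quad

    g, R = 9.81, 1.0                      # SI units; R drops out of the final density
    rho_oil, rho_hg = 800.0, 13600.0      # kg/m^3

    def p(z):
        # gauge pressure, zero at the oil-mercury interface z = 0
        return -rho_oil * g * z if z >= 0 else -rho_hg * g * z

    # z-component of -closed-surface integral of p dA over the sphere:
    # n_z = cos(theta), z = R cos(theta), dA = R^2 sin(theta) dtheta dphi
    integrand = lambda th: p(R * np.cos(th)) * np.cos(th) * np.sin(th)
    F_z = -2 * np.pi * R**2 * quad(integrand, 0, np.pi)[0]

    V = 4 / 3 * np.pi * R**3
    print(F_z / (g * V) / 1000)           # density balancing the buoyancy: 7.2 g/cc

The net upward force equals the weight of a sphere of density 7.2 g/cc, in agreement with Archimedes' principle.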
|
The buoyant force doesn't act downwards. There is actually a buoyant force acting upwards on the top 1/2 of the sphere. That's why the sphere has to have a higher density (7.2g/cc) than you calculate (6.4g/cc).
If the oil were replaced with air, the sphere would actually sink lower into the mercury. But the buoyant force of the displaced oil allows it to rise a bit. If the oil were denser, it would rise even more. If the density of the oil was 7.2g/cc, it would rise completely into the oil and have neutral buoyancy.
Imagine you have two separate hemispheres, one with the same density as the oil and one with the same density as the mercury. If you released them anywhere, they would migrate to the fluid with the same density. If the hemisphere with the same density as the mercury were in the oil, it would sink into the mercury, but being the same density as the mercury it would neither rise nor fall due to its density in the mercury; it would have neutral buoyancy . Likewise, if the hemisphere with the same density as the oil were in the mercury, it would float up into the oil, where it would have neutral buoyancy.
If we glued the two hemispheres together, the sphere would float/sink to the boundary between the oil and mercury with the 1/2 that has the density of oil on top in the oil and the 1/2 with the density of mercury on bottom in the mercury. If the sphere had a volume of 2cc, it would have a mass of 14.4g.
For a homogenous sphere to maintain the same position, it would have to have the same mass, 14.4g/2cc or 7.2g/cc. One difference from the glued hemispheres would be that it could rotate freely.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
Does every nerve ending send information to the brain separately? Is there a nerve path (I don't know their scientific name) from every nerve ending to the brain; or are they sent to brain from the same paths in the dorsal root ganglion? If not, how can we determine the (almost) exact location of pain in our hand?
I am not very familiar with the biology except the lessons I had taken in the high school. So please try to use daily language explaining this.
|
Yes. Although utilizing the action potential is not part of their function, Schwann cells do have Na/K ATPases. In fact all animal cells do. These pumps contribute to the resting membrane potential in neurons and, with regard to Schwann cells, prevent differences in osmotic pressure from disrupting the cells.
As for your second question, action potentials do not occur in Schwann cells as there is nowhere for this "impulse" to travel to. A localized depolarization is not an action potential. Papers such as this , and this suggest that voltage-gated ion channels in Schwann cells serve complex and specific purposes such as inducing myelin formation.
|
Put simply, yes. A Schwann cell, also called a neurilemma cell, is any of the cells in the peripheral nervous system that produce the myelin sheath around neuronal axons. Schwann cells are named after the German physiologist Theodor Schwann, who discovered them in the 19th century. For this function, an action potential is needed.
The principal primary active transport system in neurons, as in most other animal cells, is a P-type pump that concurrently extrudes Na+ and accumulates K+. For brevity, we will refer to it as the Na,K-ATPase. Depending on their functions, different tissues have vastly different requirements for pumping Na+ and K+. Transport by Na,K-ATPases is specifically inhibited by cardiac glycosides, such as ouabain.
http://study.com/academy/lesson/the-myelin-sheath-schwann-cells-nodes-of-ranvier.html
|
HuggingFaceH4/pmp-stack-exchangedata/biology.stackexchange.com
|
I want to find statistical support that dependence is inversely proportional to power. To do so, I have
~260 cases,
with four questions about dependence, and
one question about power
The questions about dependence are on a continuous scale, whereas the question about power only allows three ordered answers (I am powerful, equilibrium, the other one is powerful).
To support the inverse proportionality of power and dependence (in this application), is the right way to do an ordinal logistic regression?
I have plotted grouped error bars for the data ( error.bars.by() ) and they show the case I want to prove quite clearly; however I suppose I need the right figures on top of that as well.
Thanks for any suggestions and advice.
I had a look at some threads on Cross validated ( 1 , 2 , 3 ) about similar questions, and as far as I understand there is not a clear answer to how the above described mutual inverse correlation/proportionality could be tested. Correct me if I'm wrong.
Thanks for the answers and comments so far. As far as I understand, ordinal logistic regression helps me find support for a relation of power with dependence if I use dependence as a predictor and power as the dependent variable.
|
Don't apologize for this.
Think of it graphically, in terms of a Venn diagram:
http://i.imgur.com/Yzr1TKM.gif
This doesn't have to do with independence, by the way.
Look at the intersection $P(A∩B)$, right? Now, look at the union $P(B∪C)$! It's obviously simply the parts of $B$ and $C$ put together. Now we look at the intersection of those two areas. Since $P(A∩B)$ is completely contained in $P(B∪C)$, the intersection IS $P(A∩B)$.
Or, look at the diagram: We have the part labeled $P(A∩B)$, of course, then we have the part labeled $P(A∩B∩C)$, but everything else is not contained in the original $P(A∩B)$ and therefore not part of the intersection.
There you have it: One is contained in the other.
|
An alternative way to see it is to use the distributive law
$$
X \cap (Y \cup Z) = (X \cap Y) \cup (X \cap Z),
$$
with $X = (A \cap B)$, $Y = B$, and $Z = C$. So, we have
\begin{align}
(A \cap B) \cap (B \cup C)
&= ((A \cap B) \cap B) \cup ((A \cap B) \cap C) \\
&= (A \cap B) \cup (A \cap B \cap C) \\
&= (A \cap B).
\end{align}
So, ${\rm Pr}\{(A \cap B) \cap (B \cup C)\} = {\rm Pr}\{A \cap B\}$.
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
Hey everyone!
Lately I remembered an exercise from an algebra class from Jacobson's book: prove that if an element has more than one right inverse then it has infinitely many; Jacobson attributes this exercise to Kaplansky. Regardless of the solution, I began to wonder:
Does anybody know any explicit examples of rings that have this property of having elements with infinitely many (or, thanks to Kaplansky, multiple) right inverses? Is the same true for left inverses?
I came across an article from the AMS Bulletin that studied this topic, but skimming through it I could not find an explicit example; sorry, I can't remember the author.
Anyways, thanks and good luck!
|
Let $M$ be a module (over some ring) such that $M$ is isomorphic to $M\oplus M$, for example an infinite-dimensional vector space over a field. Let $R$ be the ring of endomorphisms of $M$. Let $f\in R$ be projection of $M\oplus M$ on the first factor composed with an isomorphism $M\to M\oplus M$. Then $f$ has as many right inverses as there are homomorphisms $M\to M$.
|
Consider the space $\mathbb{Z}^\mathbb{N}$ of integer sequences $(n_0,n_1,\ldots)$, and take $R$ to be its ring of endomorphisms. Then the ``left shift'' operator
$$(n_0,n_1,\ldots) \mapsto (n_1,n_2,\ldots)$$
has plenty of right inverses: a right shift, with anything you want dropped in as the first co-ordinate, gives a right inverse.
I recall finding this example quite helpful with the exercise ``two right inverses implies infinitely many'' — taking a couple of the most obvious right inverses in this case, and seeing how one can generate others from them.
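For what it's worth, here is a toy Python illustration of the same example (finite tuples stand in for the infinite sequences, so this is only a sketch of the idea):

    def left_shift(seq):
        """(n0, n1, n2, ...) -> (n1, n2, ...)"""
        return seq[1:]

    def right_shift(first):
        """A right inverse of left_shift: drop `first` into coordinate 0."""
        return lambda seq: (first,) + seq

    s = (3, 1, 4, 1, 5)
    for c in (0, 7, -2):                  # three distinct right inverses
        assert left_shift(right_shift(c)(s)) == s

Every choice of first coordinate gives a different right inverse, so there are infinitely many.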
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
We can store cold (ice), heat (e.g. a hot water bag) and electrical charge (batteries). We can even "store" a magnetic field in a magnet. We can convert light into energy and then, if we want, back to light. But we can't store light in the form of light in significant amounts. What is the explanation of that in physics terms?
|
For the photons that make up light to exist they have to be travelling at the speed of light. This means that to store them you have to put them in a container where they can move around at the speed of light until you want to let them out.
You could build the container out of mirrors, but no mirror we can currently build is 100% reflective, nor indeed can any mirror ever be. Usually when a photon "hits" the mirror it is absorbed by one of the atoms in the mirror and then re-emitted back out into the container. However, occasionally the photon either won't get re-emitted (leaving the atom in an excited state) or it doesn't hit one of the atoms and makes its way through the mirror and out of the container.
While the chances of this happening for an individual photon are low, there are lots of photons travelling very fast so it happens many times thus causing the light to "leak" or decay.
Building a near perfect mirror is hard, so it's easier to convert the light into something that can be stored and then convert that back into light when you need it.
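To get a feeling for the numbers, here is a rough back-of-the-envelope estimate in Python (the cavity length and reflectivity are assumed values, not measured ones):

    c = 3.0e8      # speed of light, m/s
    L = 1.0        # assumed cavity length, m
    R = 0.99999    # assumed mirror reflectivity

    bounce_time = L / c            # time between reflections
    n_bounces = 1 / (1 - R)        # ~1/e of the light survives this many bounces
    print(n_bounces * bounce_time) # ~3e-4 s

Even with an extremely good mirror, the light "stores" for well under a millisecond, which is why converting it to another form of energy is the practical route.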
|
The answer by John Rennie and subsequent comments reminded me of this TEDtalk about energy storage from light.
I don't know the details, but this is what I understand they did: they've studied the electronic and absorption properties of foils made of nanotubes, in particular when combined with the result of some impressive research on infrared imagery. The combined product gains the extraordinary property that it can absorb light and store the energy for longer periods of time and in a cleaner way than batteries (our main and perhaps only real method for energy storage). This energy could be free (because you could just attach these flexible foils to your window for example) and it could even be shared through the coherent re-emission of light (from your window to your neighbours window for example).
It's not storing light in the form of light - what the question asked for - but I think it's as close as we're able to get to storing sunlight for a semi-long period of time efficiently and conveniently, something photovoltaic cells are still struggling with. I imagine Justin Hall-Tipping is making it sound more advanced than it is at this time, but nonetheless it has some great potential and I think it's definitely useful to mention it here. I reiterate that I'm unaware of the details and am not an expert in this field though.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
Why bother with p-values, R squared, etc.? Model size is not a factor with the computing power available now, so why not just run multiple iterations of all possible sets of input variables and see which one has the lowest cross-validation error?
|
A good reason not to do this is that the cross-validation estimator has a finite variance, so if you evaluate it on many choices of input variables you will end up with a set that explains the data you have well, but will generalise poorly as it has effectively learned the noise that is particular to that dataset. The more choices you investigate, the worse the problem gets. Often you end up with a worse predictor than a regularised model with all the features, such as ridge regression. So if you are interested in predictive performance, don't perform feature selection at all; instead use regularisation. This is the advice given in Miller's monograph on subset selection in regression, and in my experience, he is right.
|
I would personally favor cross-validated score evaluation because:
it is easily interpretable by the analyst provided that the underlying score function (accuracy, f1-score, RMSE...) is interpretable too,
it gives an idea of the uncertainty by looking at the stdev of the score values across CV folds,
it gives a way to decompose the error into bias (error measured on train folds) and variance (difference of errors measured on train and test folds).
Model size is not a factor with the computing power
This is not always true: deep learning models, for instance, have a model size that is often limited by the hardware (typically the amount of RAM on the GPU card).
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
In this course lecture; section 5.1 , single-source shortest path (SSSP) is formulated as the following linear program (LP):
\begin{align}
\max &\sum d_u \\
\text{subject to} & \\
d_v &\le d_u + l_{uv} \quad \forall (u,v) \in E \\
d_s &= 0
\end{align}
The comment on the objective function is as follows (emphasis added):
The variables $d_u$ represent the distances from $s$ to each vertex $u$. Maximizing the sum of the $d_u$ is done by maximizing each one individually, since increasing any single $d_u$ never forces us to decrease some other $d_v$ .
I can get its basic idea. However, how can one argue more rigorously that $(\max d_u \;\forall u \in V)$ is equivalent to $(\max \sum d_u)$? Specifically, why is it that "increasing any single $d_u$ never forces us to decrease some other $d_v$"?
|
Any optimal solution to the problem must satisfy
$$
d_v = \min_{u\colon (u,v) \in E} (d_u + \ell_{uv}),
$$
as well as $d_s = 0$, of course. Assuming the graph is connected, you can prove by induction on the length (number of edges) of a shortest path from $s$ to $v$ that $d_v$ is at most the distance from $s$ to $v$, which we denote by $d^*_v$. In particular, the optimal value is at most $\sum_v d^*_v$.
On the other hand, it is not hard to check that $d_v = d^*_v$ (for all $v$) itself is a feasible solution, showing that the optimal value is exactly $\sum_v d^*_v$, and it is achieved only when $d_v = d^*_v$ for all $v$.
|
The claim that "increasing any single $d_u$ never forces us to decrease some other $d_v$ " can be seen from the constraint $d_v \leq d_u + l_{uv}$ . Here, increasing $d_u$ will not cause a violation in this particular constraint, while increasing $d_v$ would require $d_u$ to increase (not decrease) in order to keep the constraint true.
Now for the "maximizing each one individually" part, recall that the single source single destination shortest paths problem is solved by having the above linear program maximizing a single $d_u$ instead of the sum. From the claim in the paragraph above, we see that maximizing the sum results in a solution for which it is impossible to improve any individual $d_u$ (if it were, then since we need not decrease anything, it would be guaranteed to contradict our initial assumption of having already maximized the sum).
Hence maximizing the sum gives a solution for which each individual $d_u$ has the same value as if it were individually maximized in the single-pair case, and thus maximizing the sum optimally solves the single-source all destination problem.
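As an illustration, here is a small sketch of this LP using scipy.optimize.linprog (the graph is a made-up example, and since linprog minimizes, the objective is negated):

    import numpy as np
    from scipy.optimize import linprog

    # edges (u, v, length); vertex 0 is the source s
    edges = [(0, 1, 1), (0, 2, 4), (1, 2, 2), (2, 3, 1), (1, 3, 5)]
    n = 4

    c = -np.ones(n)                       # maximize sum(d) = minimize -sum(d)
    A_ub = np.zeros((len(edges), n))
    b_ub = np.zeros(len(edges))
    for i, (u, v, l) in enumerate(edges):
        A_ub[i, v], A_ub[i, u], b_ub[i] = 1, -1, l   # d_v - d_u <= l_uv

    bounds = [(0, 0)] + [(0, None)] * (n - 1)        # d_s = 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(res.x)                                     # [0. 1. 3. 4.] = true distances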
|
HuggingFaceH4/pmp-stack-exchangedata/cs.stackexchange.com
|
Is it true that for two random variables $A$ and $B$,
$$E(A\mid B)=E(B\mid A)\frac{E(A)}{E(B)}?$$
|
$$E[A\mid B] \stackrel{?}= E[B\mid A]\frac{E[A]}{E[B]} \tag 1$$
The conjectured result $(1)$ is trivially true for independent random variables $A$ and $B$ with nonzero means.
If $E[B]=0$, then the right side of $(1)$ involves a division by $0$ and so $(1)$ is meaningless. Note that whether or not $A$ and $B$ are independent is not relevant.
In general , $(1)$ does not hold for dependent random variables but specific examples of dependent $A$ and $B$ satisfying $(1)$ can be found. Note that we must continue to insist that $E[B]\neq 0$, else the right side of $(1)$ is meaningless. Bear in mind that $E[A\mid B]$ is a random variable that happens to be a function of the random variable $B$, say $g(B)$ while $E[B\mid A]$ is a random variable that is a function of the random variable $A$, say $h(A)$. So, $(1)$ is similar to asking whether
$$g(B)\stackrel{?}= h(A)\frac{E[A]}{E[B]} \tag 2$$
can be a true statement, and obviously the answer is that $g(B)$ cannot be a multiple of $h(A)$ in general.
To my knowledge, there are only two special cases where $(1)$ can hold.
As noted above, for independent random variables $A$ and $B$, $g(B)$ and $h(A)$ are degenerate random variables (called constants by statistically-illiterate folks) that equal $E[A]$ and $E[B]$
respectively, and so if $E[B]\neq 0$, we have equality in $(1)$.
At the other end of the spectrum from independence, suppose that
$A=g(B)$ where $g(\cdot)$ is an invertible function and thus $A=g(B)$ and $B=g^{-1}(A)$ are wholly dependent random variables. In this case,
$$E[A\mid B] = g(B), \quad E[B\mid A] = g^{-1}(A) = g^{-1}(g(B)) = B$$
and so $(1)$ becomes
$$g(B)\stackrel{?}= B\frac{E[A]}{E[B]}$$
which holds exactly when $g(x) = \alpha x$ where $\alpha$ can be
any nonzero real number. Thus, $(1)$ holds whenever $A$ is a scalar multiple of $B$, and of course $E[B]$ must be nonzero
(cf. Michael Hardy's answer ). The above development
shows that $g(x)$ must be a linear function and that
$(1)$ cannot hold for affine functions $g(x) = \alpha x + \beta$ with $\beta \neq 0$. However, note that Alecos Papadopolous in
his answer and his comments thereafter claims that if $B$
is a normal random variable with nonzero mean, then for specific
values of $\alpha$ and $\beta\neq 0$ that he provides,
$A=\alpha B+\beta$
and $B$ satisfy $(1)$. In my opinion, his example is incorrect.
In a comment on this answer, Huber has suggested considering the
symmetric conjectured equality
$$E[A\mid B]E[B] \stackrel{?}=E[B\mid A]E[A]\tag{3}$$
which of course always holds for independent random variables regardless of the values of
$E[A]$ and $E[B]$ and for scalar multiples $A = \alpha B$ also. Of course, more trivially, $(3)$
holds for any zero-mean random variables $A$ and $B$ (independent or dependent, scalar multiple or not; it does not
matter!): $E[A]=E[B]=0$ is sufficient for equality in $(3)$.
Thus, $(3)$ might not be as interesting as $(1)$ as a topic for discussion.
|
The expression certainly does not hold in general. For the fun of it, I show below that if $A$ and $B$ follow jointly a bivariate normal distribution, and have non-zero means, the result will hold if the two variables are linear functions of each other and have the same coefficient of variation (the ratio of standard deviation over mean) in absolute terms.
For jointly normals we have
$$\operatorname{E}(A \mid B) = \mu_A + \rho \frac{\sigma_A}{\sigma_B}(B - \mu_B)$$
and we want to impose
$$\mu_A + \rho \frac{\sigma_A}{\sigma_B}(B - \mu_B) = \left[\mu_B + \rho \frac{\sigma_B}{\sigma_A}(A - \mu_A)\right]\frac{\mu_A}{\mu_B}$$
$$\implies \mu_A + \rho \frac{\sigma_A}{\sigma_B}(B - \mu_B) = \mu_A + \rho \frac{\sigma_B}{\sigma_A}\frac{\mu_A}{\mu_B}(A - \mu_A)$$
Simplify $\mu_A$ and then $\rho$, and re-arrange to get
$$B = \mu_B +\frac{\sigma^2_B}{\sigma^2_A}\frac{\mu_A}{\mu_B}(A - \mu_A)$$
So this is the linear relationship that must hold between the two variables (so they are certainly dependent, with correlation coefficient equal to unity in absolute terms) in order to get the desired equality. What it implies?
First, it must also be satisfied that
$$E(B) \equiv \mu_B = \mu_B+\frac{\sigma^2_B}{\sigma^2_A}\frac{\mu_A}{\mu_B}(E(A) - \mu_A) \implies \mu_B = \mu_B$$
so no other restriction is imposed on the mean of $B$ (or of $A$) except that they be non-zero. Also, a relation for the variances must be satisfied,
$$\operatorname{Var}(B) \equiv \sigma^2_B = \left(\frac{\sigma^2_B}{\sigma^2_A}\frac{\mu_A}{\mu_B}\right)^2\operatorname{Var}(A)$$
$$\implies \left(\sigma^2_A\right)^2\sigma^2_B = \left(\sigma^2_B\right)^2\sigma^2_A\left(\frac{\mu_A}{\mu_B}\right)^2$$
$$\implies \left(\frac{\sigma_A}{\mu_A}\right)^2 = \left(\frac{\sigma_B}{\mu_B}\right)^2 \implies (\text{cv}_A)^2 = (\text{cv}_B)^2$$
$$\implies |\text{cv}_A| = |\text{cv}_B|$$
which was to be shown.
Note that equality of the coefficient of variation in absolute terms, allows the variables to have different variances, and also, one to have positive mean and the other negative.
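For a quick numeric illustration of this condition (the parameter values below are arbitrary, and $\rho$ is taken to be $1$ so that $B$ is the forced linear function of $A$):

    import numpy as np

    def identity_holds(mu_A, s_A, mu_B, s_B):
        A = np.linspace(-3.0, 6.0, 7)                 # a few test values
        B = mu_B + (s_B / s_A) * (A - mu_A)           # rho = 1 forces this line
        lhs = mu_A + (s_A / s_B) * (B - mu_B)         # E[A | B]
        rhs = (mu_B + (s_B / s_A) * (A - mu_A)) * mu_A / mu_B  # E[B | A] E[A]/E[B]
        return np.allclose(lhs, rhs)

    print(identity_holds(2, 1, 4, 2))   # |cv_A| = |cv_B| = 0.5 -> True
    print(identity_holds(2, 1, 5, 2))   # 0.5 vs 0.4           -> False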
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
I have a curve interpolation problem.
I have two closed curves that are defined on an X,Y plane. How can I define a 3rd curve that is the average of those two? Programmatically, I have a list of points for each curve, let's say N1 for curve 1 and N2 for curve 2, where N1 != N2 (most likely).
When I say 'average', initially, I would like the contribution of each curve to the final curve to be identical. Eventually, I would like to be able to weight the contributions of each curve (i.e., have my interpolated curve be 'closer' to one curve than another).
How can I go about doing this?
In a 1D case, I believe that the problem is somewhat easier, like I could solve it using some kind of projection (although my linear algebra is really rusty at this point). Is that intuition somewhat correct, and therefore can be extended to the 2D case?
|
I think the keyword you need to find literature on your problem is morphing .
There is an extensive computer graphics literature on this.
Below is a figure from one paper selected almost at
random:
"Morphing Using Curves and Shape Interpolation Techniques"
Johan, H.; Koiso, Y.; Nishita, T.
in Computer Graphics and Applications , pp. 348 - 454, 2000 .
The 4th figure in the sequence could serve as the average you seek.
Another reference is "Multiresolution Morphing for Planar Curves,"
Hahmann, S. and Bonneau, G.-P. and Caramiaux, B. and Cornillac, M.,
Computing 79 (2007) 197-209 . Using the key search terms, and Google Scholar with these
two references, should bring you to a wealth of relevant literature.
See also this related MO question on the distance between two curves.
|
One way to define an average could start as follows: you first introduce a metric on the space of all curves, i.e. a way of telling the distance between two curves (there should be a natural way to do this for curves in the Euclidean plane). Then you try to find a shortest path in this space connecting the two curves (a geodesic), and call the curve lying halfway along it the average. Since you are actually considering polygons with a fixed number of vertices, the "space of all curves" is finite dimensional, so things might be more computable. I assume this leads to some interesting mathematics but I don't know what has been done in this area. Maybe you can find some more information in this article of Younes, Michor, Shah and Mumford.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
It's a classic question with many answers all over the Internet, but none here so I figured I'd ask it:
How fast would the Earth need to spin for a person (or anything for that matter) to feel weightless while on the surface at the equator?
In this situation everything on the Earth's surface would essentially be in orbit around the Earth at the radius of the Earth's surface (let us assume the atmosphere was also spun up to this angular velocity so there would be no air drag slowing things down). Let us also say by "surface of the Earth" we mean mean sea level.
You can decide for yourself if/how to factor in the bulge of the Earth. You can assume that the Earth somehow is able to maintain its present shape while spinning up.
Any comments on whether an Earth spinning slightly faster than this speed will cause it to break apart or not will also be appreciated.
|
How fast would a sphere need to rotate for a dust speck at its equator to achieve balance between gravitational attraction and centrifugal force?
If you do the math (equating $G M m / R^2$ to $m \omega^2 R$ and using $M = \frac{4\pi}{3} \rho R^3$ as well as $\omega = 2\pi f$), it follows that the size of the sphere is entirely irrelevant and that only the density $\rho$ of the sphere enters into the equation for $f$, the number of revolutions per unit time: $$f^2 = \frac{1}{3\pi}G\rho$$
For $\rho = 5.5 \times 10^3$ kilogram per cubic meter (the density of planet earth) it follows that $f=0.197 \times 10^{-3}$ revolutions per second, corresponding to a revolution period of $5070$ seconds (1 hour and 24 minutes).
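For anyone who wants to verify the arithmetic, a short Python check (using the usual textbook value of $G$):

    import math

    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    rho = 5.5e3      # mean density of the Earth, kg/m^3

    f = math.sqrt(G * rho / (3 * math.pi))
    print(f)             # ~1.97e-4 revolutions per second
    print(1 / f / 60)    # ~84.5 minutes per revolution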
|
In view of the difference in orders of magnitude of previous answers, let’s see what dimensional analysis can tell us.
We look for a dimensionless combination ${\varpi}$ of $v$, $M$, $G$ and $g$, where $v$ is the velocity, $M$ is the mass of the Earth, $G$ the gravitational constant and $g=9.8m/s^2$ the gravitational acceleration at the surface of the Earth. By elementary manipulations we get that
$$
\varpi=v \left(GM g \right)^{-1/4}
$$
is dimensionless. Thus,
$$
v=R\omega = (GM g)^{1/4}\quad \Rightarrow \quad
\omega =\frac{(GMg)^{1/4}}{R}
$$
with $R$ the radius of the Earth. Plugging numbers we get
$$
\omega\sim 1.2\times 10^{-3} \hbox{sec}^{-1}\, ,
$$
which would agree almost exactly with the answer of @sammy gerbil.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
Let $\mu$ be a finite nonatomic measure on a measurable space $(X,\Sigma)$, and for simplicity assume that $\mu(X) = 1$. There is a well-known "intermediate value theorem" of Sierpiński that states that for every $t \in [0,1]$, there exists a set $S \in \Sigma$ with $\mu(S) = t$.
I would like to use the following stronger conclusion for such a measure:
There exists a chain of sets $\{S_t \mid t \in [0,1]\}$ in $\Sigma$,
with $S_s \subseteq S_r$ whenever $0 \leq s \leq r \leq 1$, such that
$\mu(S_t) = t$ for all $t \in [0,1]$.
(One can view this as the existence of a right inverse to the map $\mu \colon \Sigma \to [0,1]$ in the category of partially ordered sets.)
This statement appears (albeit hidden within a proof) on the Wikipedia page for " Atom (measure theory) ," and even includes a sketch for the proof! However, I would like to see some mention of this in the literature. I've checked the Wiki references and they both seem to prove the weaker statement. I looked in Fremlin's Measure Theory , vol. 2, and again found the weaker version but not the stronger.
Question: Can anyone provide me with such a reference?
A proof. In case anyone stumbles onto this page and wants to see a proof, I'll sketch one that is more constructive than the one that I linked to above. Set $S_0 = \varnothing$ and $S_1 = X$. By Sierpiński, there exists $S_{1/2} \in \Sigma$ of measure $1/2$. For each dyadic rational $q = m/2^n \in [0,1]$ ($1 \leq m \leq 2^n$), we may proceed by induction on $n$ to construct each $S_q$. Now given $r \in [0,1]$, set $S_r = \bigcup_{q \leq r} S_q$. (This is essentially the same method of proof as the one in the reference provided in Ramiro de la Vega's answer.)
|
I would say this is folklore (I proved it and used it many years ago on my undergrad thesis), but here is a concrete reference:
Such a family of measurable sets is called a $[0,1]$-family in On the Skorokhod representation theorem by Jean Carlos Cortissoz, PAMS, Vol.135, No. 12, 2007 (see Definition 4.1). A proof that such a family exists in any non-atomic space is given in Lemma 4.1.
|
There's a stronger version of that (basic) theorem due to Lyapunov. It is stronger because it concerns vectors of measures, and not only a single measure. It states that a non-atomic vector measure (a collection of $n$ measures $\mu_1,\ldots, \mu_n$ where each measure is non-atomic) always has an image which is convex (in $\mathbb{R}^n$).
Unfortunately, I could never find a translation of his paper, so I can only link the version in Russian . The main statements can be found in French at the end of the paper. There's also a paper of Halmos that proves the result.
Maybe looking at the proof method or subsequent papers you can find the chain statement that you seek.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
I'm working on a project that would benefit from using A.I. or machine learning to analyse news feeds from a variety of websites and grade each article between 0 and 10. We would manually grade hundreds of articles to train the A.I. on what we like and what we don't like using the scoring range. The A.I. is expected to learn how we grade by identifying similarities between articles. When the A.I. starts to grade similarly to humans, we would go more hands-off and leave this task to the A.I.
I'm not sure where to start with A.I.; what tools and approaches would be the easiest way to achieve this?
|
I can come up with multiple advantages of a Siamese network over a single neural network for similarity measurement:
Training Phase. If a single network were used in place of a Siamese network, it might require double the number of parameters (weights). Hence, training the network will likely converge more slowly and the network will be more sensitive to noise.
Testing Phase. Note that these similarity measurements are used in applications like face recognition. Now, suppose we are going to use the model in such a system. If we have implemented the model as a Siamese network, we only need to compute the output of the model for the input once, then use the cached results for the existing images in the database, and finally compute the similarity measures quickly. On the other hand, if we have implemented the measurement with a single neural network, we would have to compute, per query, the result for every combination of the input and the images in the database. Hence, in the latter case, we cannot cache the results for the existing data in the database. Therefore, a single-network implementation will have a much more expensive query time on massive datasets than a Siamese implementation.
|
In addition to @Omg's answer note that Siamese networks are typically used in situations where applying (A,B) to the inputs must generate the same output as applying (B,A) (i.e. the similarity measure of A to B is the same as the similarity of B to A ).
With a network with separate weights, this is not guaranteed. One way to get close to this is to not only use samples (A,B) as training input but also (equally often) (B,A) . Effectively this doubles the number of training steps (and therefore training time) and the network output is still not guaranteed to be symmetric.
By sharing weights, the symmetry of the response of the network ( (A,B) gives the same output as (B,A) ) is guaranteed by design.
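To make the weight sharing concrete, here is a minimal tf.keras sketch (the layer sizes and input shape are made-up assumptions): applying the same encoder object to both inputs means the two branches literally share weights, and a distance built from the two embeddings is symmetric by construction.

    import tensorflow as tf
    from tensorflow import keras

    # One encoder, applied twice: both branches share the same weights.
    encoder = keras.Sequential([
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(32),
    ])

    in_a = keras.Input(shape=(128,))
    in_b = keras.Input(shape=(128,))
    emb_a = encoder(in_a)   # embeddings of database items can be cached once
    emb_b = encoder(in_b)

    # Squared Euclidean distance: swapping (A, B) cannot change the output.
    diff = keras.layers.Subtract()([emb_a, emb_b])
    dist = keras.layers.Lambda(lambda d: tf.reduce_sum(tf.square(d), axis=-1))(diff)
    model = keras.Model(inputs=[in_a, in_b], outputs=dist)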
|
HuggingFaceH4/pmp-stack-exchangedata/ai.stackexchange.com
|
Pretty much letting my mind free-wheel.
Assume a fleet of air-supported hover-craft were to replace cars/etc on the streets. Assume also that the present traffic-signals/pedestrian rules remain unchanged.
As I understand it, hovercraft come to a gradual stop, similar to a train.
What would be the equivalent of disc-brakes on hover-craft? i.e. How would you improve braking capability on a hovercraft?
p.s. Friction, Aerodynamics, Inertia hence the post here in the Physics forum but please feel free to vote as OT ...
|
To deal with lift fan failure, you'd need some landing pads anyway. If you design those properly you can use them for braking, too. The biggest problem might be how you'd lose the air cushion rapidly, in case of emergency braking.
Maglev trains use a similar solution as they've got the same problem.
|
A motor in the front position should be adjusted to the same power as the back motor. When the two are switched on together, the equal and opposite forces will cancel each other and not allow the craft to move.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I performed a simple ANOVA in R and then generated the following TukeyHSD() comparisons of means:
I have a pretty good idea (I think) of what all this means except the 'p adj'. If I'm correct:
The difference in test scores between say Juniors and Freshmen is 4.86, with Juniors averaging 4.86 points higher.
The 95% confidence interval of that difference is between -12.19 and 21.91 points.
But it's not clear to me what the p adj represents. First of all, adjusted for what? Secondly, is this to be interpreted like any other p-value? So, between juniors and freshmen there is no statistical difference in the means (because the p-value > .05)?
|
p adj is the p-value adjusted for multiple comparisons using the R function TukeyHSD() . For more information on why and how the p-value should be adjusted in those cases, see here and here .
Yes you can interpret this like any other p-value, meaning that none of your comparisons are statistically significant. You can also check ?TukeyHSD and then under Value it says:
A list of class c("multicomp", "TukeyHSD"), with one component for each term requested in which. Each component is a matrix with columns diff giving the difference in the observed means, lwr giving the lower end point of the interval, upr giving the upper end point and p adj giving the p-value after adjustment for the multiple comparisons.
|
The p adj value tells you if there is a significant difference between comparisons. To know if there is a statistical difference, first and foremost you have to check the p-value from your ANOVA test. If that p-value is greater than 0.05, then there is no need to run post hoc tests such as Tukey's, because you already know that there are no significant differences. I am sure that in this example the p-value was greater than 0.05 for the ANOVA test, which is why, when you ran the post hoc Tukey test, no significant differences were observed.
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
When a cat or any body falls over to the ground, how is momentum conserved?
I was working on a problem of a cat falling on top of a skateboard, and the system travels together with a new velocity. That seemed intuitive enough for me. This is how I was thinking through:
The cat had momentum that became zero after the impact. Should not the skateboard have recoiled in some way, due to the conservation of momentum? After all, the change in momentum for the board should have been something measurable.
I guess there is something wrong with the way I am approaching the problem. Could you please help me identify this?
EDIT: I apologise: but the situation is like this - a skateboard moves on the ground with constant speed, until the cat is dropped from a tree. The cat lands on the skateboard and then proceeds with a new speed.
|
The spokes are under tension. That's the force toward the center. Provided by molecular bonds, to counteract the "centrifugal force".
Not just the spokes of course. The metal rim and the rubber tire also have molecular forces that oppose the pull to expand outward.
|
Think of it as a mass attached to a string to which you apply a force to make it do circular motion. The force that you apply makes the wheel turn; the centripetal force and the centrifugal force cancel each other out. I think you are talking about the centripetal force when you say 'net force towards the center'.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I have what I think is a very simple question, basically, what does the notation " $\langle \rangle$ " stand for?
My background is in math and I am not familiar with physics notations. I am reading the following:
"we assume that $\epsilon_k$ can be approximated as zero mean Gaussian measurement noise with $\langle \epsilon_j \epsilon_k \rangle=\sigma^2\delta_{jk}$ ."
From what I have found it seems like the $\langle \epsilon_k \rangle$ notation would indicate the mean, following the example above $\langle \epsilon_k \rangle=0$ . Yet I am not clear what to make of the $\langle \epsilon_j \epsilon_k \rangle$ . How is it defined?
Context
The $\{\epsilon_k\}$ with $k$ in $\{1,\ldots,n\}$ would refer to noise at each timepoint in the measurement of a particle trajectory. And it is assumed that
$\epsilon_k \sim Normal(0,\sigma^2)$
Reflection and question based on some the answers received
Based on the answers received I understand the $\langle \rangle$ notation to represent the expectation. And in this case:
$\langle \epsilon_k \rangle=0$ (the first moment) and,
$\langle \epsilon^2_k \rangle=\sigma^2$ (the second moment)
If the noise is not correlated between timepoints then:
$\langle \epsilon_i \epsilon_j\rangle=0$ for $i \neq j$
What I find confusing is that I could see this if we were talking about consecutive values:
$\langle \epsilon_i \epsilon_{i+1} \rangle= \frac{1}{n-1} \sum_{i=1}^{n-1}(\epsilon_i \epsilon_{i+1})$
But does the notation indicate the product at all possible intervals? What does it mean exactly to do, i.e. how would one calculate:
$\langle \epsilon_i \epsilon_j \rangle$
I guess if one wanted to calculate it one would need to know the distance between the $i$ 's and $j$ 's.
|
$\langle x\rangle$ refers to the expectation value of $x$ .
$\delta_{jk}$ is the Kronecker delta, defined as:
$$\delta_{jk}=\left\{\begin{align}0 && j\ne k \\ 1 && j=k\end{align}\right.$$
So this is a shorthand way of saying that for any $j$ , $\langle e_j^2\rangle=\sigma^2$ and that if $j\ne k$ , $\langle e_je_k\rangle=0$ . In other words, the RMS value of the noise is $\sigma$ for all time, and the value of the noise is uncorrelated between any two time points $j$ and $k$ .
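A small numpy simulation makes this concrete (the value of $\sigma$ and the number of samples are arbitrary choices): the empirical matrix of $\langle \epsilon_j \epsilon_k \rangle$ comes out close to $\sigma^2$ on the diagonal and $0$ elsewhere.

    import numpy as np

    rng = np.random.default_rng(0)
    sigma, n, trials = 2.0, 5, 200_000
    eps = rng.normal(0.0, sigma, size=(trials, n))  # rows: draws of (e_1, ..., e_n)

    emp = eps.T @ eps / trials       # empirical <e_j e_k> matrix
    print(np.round(emp, 2))          # ~ sigma^2 * identity = 4 * I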
|
Chris' answer is correct. To put it into a math context, think of the $\epsilon_i$ as independent identically distributed random variables. Two random variables $\epsilon_i$ and $\epsilon_j$ differ if $i\ne j$. Since they are independent we get
$$
\langle \epsilon_i \epsilon_j \rangle =:E[\epsilon_i \epsilon_j] = E[\epsilon_i] E[\epsilon_j] = 0
$$
where $E[.]$ is the expectation value.
However, if $i=j$ we get
$$
\langle \epsilon_i \epsilon_i \rangle =:E[\epsilon_i^2] = E[\epsilon_i^2] - \underbrace{E[\epsilon_i]^2}_{=0} = Var[\epsilon_i] =: \sigma^2
$$
where $Var[.]$ is the variance.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I want to know what we actually measure on a weighing machine: true weight or apparent weight? Please help me understand this concept.
|
A weighing machine measures the force exerted by a body on the weighing machine.
Newton's third law then predicts that there is a force of the same magnitude and opposite in direction acting on the body producing the force.
On the Earth if the weighing machine and the body are not accelerating (ignoring the rotation of the Earth) then the reading on the weighing machine will be the weight of the body.
If the weighing machine and the body are accelerating then you could call the reading on the weighing machine the apparent weight of the body.
So, including the effect of the rotation of the Earth, it is only at the geographic poles that the reading on the weighing machine is the weight of the body.
Elsewhere on the Earth the reading on the weighing machine will be lower than at the poles so you could call that the apparent weight.
The difference between these readings is small.
If the weight of the body is $10 \, \rm N$, then with the weighing machine and the body in a stationary lift, or a lift moving at constant velocity upwards or downwards, the reading on the weighing machine would be $10 \, \rm N$, which is the weight of the body.
If the weighing machine and the lift had an upward acceleration of $5 \,\rm m s^{-2}$ then the reading on the weighing machine would be $15 \, \rm N$ and you could say that the apparent weight of the body was $15 \, \rm N$
If the weighing machine and the lift had a downward acceleration of $5 \,\rm m s^{-2}$ then the reading on the weighing machine would be $5 \, \rm N$ and you could say that the apparent weight of the body was $5 \, \rm N$
If the weighing machine and the lift had a downward acceleration of $10 \,\rm m s^{-2}$ then the reading on the weighing machine would be $0 \, \rm N$ and you could say that the apparent weight of the body was zero - the body appeared to be weightless.
The definition of weight that I have used is that the weight of a body is the force on the body due to the gravitational attraction of the Earth.
However others define the weight of a body as the reading on a weighing machine as explained by Walter Lewin in one of his 8.01 Classical Mechanics lectures .
Using this definition a body is weightless when it is in free fall.
|
I want to know what we actually measure on a weighing machine: true weight or apparent weight?
What a "weighing machine" measures depends on the nature of the machine. A spring scale measures the compression of a spring that balances out the forces and torques exerted by the object on the spring pan. Assuming no torque, the force exerted by the test object on the pan is the downward normal force. The force exerted by the pan on the test object is the equal-but-opposite upward normal force. This is test object's apparent weight (ignoring buoyancy). The compression of the spring is thus an analog for apparent weight. Assuming a Hookean spring, this compression can readily be converted to apparent weight. There are a number of other types of devices that effectively measure apparent weight.
A balance scale measures the ratio of the apparent weight of a test object versus that of an object with a known mass. Since apparent weight is proportional to mass, a balance scale measures the test object's mass. A spring scale on the Moon will register about 1/6 of the value the scale would on the Earth for a given test object. A balance scale on the Moon will register more or less the same as that on the Earth for a given test object, "more or less" because the reduced gravity will increase the measurement error. A spring scale in the Space Station will register zero, more or less. A balance scale won't work in the Space Station because 0/0 is indeterminate.
What about true weight? A spring scale on a zero-g airplane flight will register a weight ranging between zero and twice the object's typical weight. The variation in the object's true weight will be very small; true weight decreases by about 0.0003086 m/s$^2$ per kilometer of altitude above sea level. A spring scale measures apparent weight rather than true weight.
There's a difference between apparent weight and true weight, even for an object at rest on the surface of the Earth. The Earth's rotation means that an object at rest on the surface of the Earth, at a distance of about $R_E\cos\phi$ from the Earth's rotation axis, undergoes a constant-magnitude acceleration toward that axis, where $R_E$ is the Earth's radius and $\phi$ is latitude. This acceleration means that the true weight of an object at rest on the surface of the Earth exceeds the object's apparent weight by about $m R_E\Omega^2\cos\phi$, where $m$ is the object's mass and $\Omega$ is the Earth's sidereal rotation rate.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I have the following situation in mind:
A big airtight bag of arbitrary shape with a person standing on it. The bag gets inflated with air to lift the person.
Assuming that the bag is much larger than the person's footprint, how do I find the minimal overpressure in the bag that I need to lift the person off the ground?
I was thinking of just dividing the normal force of the standing person by the footprint area, but I am not sure about that approach.
$$F_n = 80×9.81 = 784\text{ N}$$ $$P_n = \frac{784}{0.2×0.3} = 13066\text{ Pa}$$
I have the feeling that the bag dimensions play a role as well, as intuitively I would say that to do this, a small bag would work better than a big bag, but again I'm not sure...
|
It is just as simple as you suggest. At the moment my feet are exerting a pressure on the ground of my weight divided by whatever the area of my shoes is and the pressure exerted by the ground on me is what keeps me stationary. Exactly the same applies to your air bag. once the air pressure is the same as the pressure you exert on the bag it will support you.
But there are a couple of extra things to consider. When you stand on the bag you will compress the air in it and you'll sink until the air is compressed enough to match the pressure of your shoes. So the initial pressure can be lower than your shoe pressure and the bag can still keep you off the ground.
You mention the bag size, it's probably easier to compress the gas a lot in a small bag than in a large bag, so a small bag would probably work better. There's nothing especially fundamental about this; it's just that a large bag allows more room for the air to move into as your feet compress it.
|
According to this website it is not recommended for the object being lifted to have only a small footprint on the inflating bag:
You can see that with a small (not recommended) footprint, the surface tension of the bag (which apparently is quite stiff) will contribute to the lifting of the object (the "sling effect" mentioned in the image). These bags have multiple layers of rubber and are reinforced with either strong synthetic fibers (aramid) or steel cables.
Your calculation of 13066 Pa is correct, but it will really be the upper limit to the true amount of pressure needed to lift the person. Surface tension of the bag material will effectively increase the area that is supplying the lift, and thus a lower pressure will suffice. The exact pressure needed is impossible to calculate without detailed knowledge of the bag material, its properties, and the height you want to lift the person.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I just bought a molecular modeling kit for my toddler ("Snatoms").
To use the kit to play with her, I want to start out by memorizing a list of all molecules (isomers) fitting these criteria:
only contains C, H, and/or O
contains a maximum of 6 atoms
I started out by noodling around with Lewis diagrams and wikipedia... but then I ran into the various isomers of C2H2O2:
I wanted to do this search on a chemical database (like chemspider.com) but I couldn't find a criterion that would filter the search by number of atoms in the molecule (isomer).
Is there a name for this "number"?
|
The "number of atoms"'s name is... the number of atoms. I am not aware of any special word for that. Databases may not let you search/filter that kind of criterion, i.e. the total number of atoms, but usually let you search by "chemical formula" which is, in your case, C2H2O2.
I have just tried with Chemspider by typing "C2H2O2" in the search box and it disappointingly gave me just 3 results, i.e. glyoxal (OHC-CHO, not in your list), 42879-41-4 and 16005-17-7.
DID YOU KNOW? Just to expand the scope, did you know that the last digit in a CAS# is actually a check digit? It helps any software check that you did not mistype the number.
|
So this addresses the part about going from (number of atoms, element symbols) -> set of structure spaces. For this case, the input is (6, "CHO").
The steps I would go through are:
List all the (weak?) compositions of 6 - e.g. [3, 2, 1], [2, 3, 1], [2, 2, 2], ...
Turn each of these into a formula like C3H2O, C2H3O, C2H2O2, ...
List all the structures with that formula
Of course, step 3 might involve looking them up in a database to see if they are real structures or not.
For step 1, a 'composition' is a breakdown of a number into parts (like a partition) but where the order matters. So [3, 2, 1] is not the same as [2, 3, 1]. A weak composition allows zeros, so you can have [3, 3, 0] which means C3H3 - but that depends on whether you want to include all atoms or not.
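Steps 1 and 2 are easy to script; here is one possible Python sketch (step 3, the database lookup, is not included):

    def weak_compositions(total, parts):
        """Yield all tuples of `parts` nonnegative integers summing to `total`."""
        if parts == 1:
            yield (total,)
            return
        for first in range(total + 1):
            for rest in weak_compositions(total - first, parts - 1):
                yield (first,) + rest

    def formula(counts, symbols="CHO"):
        return "".join(s + (str(c) if c > 1 else "")
                       for s, c in zip(symbols, counts) if c)

    for counts in weak_compositions(6, 3):
        print(formula(counts))   # 28 formulas in all, e.g. C2H2O2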
|
HuggingFaceH4/pmp-stack-exchangedata/chemistry.stackexchange.com
|
If I use Lorentz transformations,
\begin{align}
x' &= \gamma (x-vt), \\
t' &=\gamma \left(t-\frac{vx}{c^2}\right),
\end{align}
I need $x,v,t$ to calculate $x'$ and $t'$ . If I only know, say for example, $x$ and proper time $t'$ , I can calculate the relative velocity of the frames, $x'$ and $t$ by using length contraction and $v=x/t=x'/t'$ . But how do I derive these quantities directly with Lorentz transformation? Is this possible? Even if I try to use the constant space-time-distance, it doesn't work out. In general I'm confused why Lorentz transformations are so important because it seems to me like one can calculate the same things with less effort by length contraction and time dilatation?
|
In general I'm confused why Lorentz transformations are so important because it seems to me like one can calculate the same things with less effort by length contraction and time dilatation?
This is not correct. The Lorentz transformations include length contraction, time dilation, and the relativity of simultaneity. Most of the so-called "paradoxes" of SR center around the relativity of simultaneity. So if you use only length contraction and time dilation then you will get most of the "paradoxes" wrong.
The Lorentz transform is an essential tool for SR, and (in my opinion) the simplified length contraction and time dilation formulas should be avoided for new students. They frequently misuse them and there is no need for them since they automatically drop out of the Lorentz transform whenever appropriate.
I can calculate the relative velocity of the frames, x′ and t by using length contraction and v=x/t=x′/t′.
No, in general it is not true that $v=x/t$ . If you happen to know that it is true for a specific scenario then you can use that fact also, but you cannot assume it in general.
In general, it depends on what you want to know. You have two equations in 5 variables ( $c$ is not a variable and $\gamma$ is just a function of $v$ so it isn't an independent variable). So if you want to determine the coordinates of a specific single event $(t',x')$ then you need three pieces of information. However, if you only want to determine, for example, the coordinates of a worldline $(t',f(t'))$ then you may only need two.
Of course, the problem itself may introduce new unknowns such as equations of motion or other new variables. There is thus no one universal answer to the question.
Small nitpick: the $t'$ used in the Lorentz transform is not proper time, it is just coordinate time in the primed frame.
|
The direct Lorentz transformations are :
$$ \begin{cases} x'=\gamma \left(x-vt\right)\;\;\;\;\;(1)\\ t'=\gamma \left(t-\frac{v}{c^{2}}x\right)\;\;\;\;(2)\end{cases}$$
(1) and (2) gives $$\begin{cases}v=(x-\gamma^{-1}x')/t\;\;\;\;\;(1')\\\\v=(t-\gamma^{-1}t')c^{2}/x\;\;\;\;(2') \end{cases}$$
To be rigorous, we have to work with differences, i.e. with
$$\begin{cases}v=(\Delta x-\gamma^{-1}\Delta x')/\Delta t\\v=(\Delta t-\gamma^{-1}\Delta t')c^{2}/\Delta x \end{cases}$$
but as usual we set $t=t'=0$ at $x'=x=0$, which changes nothing in the direct calculation of $v$.
we can see that equation (2) is just equation (1) for light, i.e. $x'=ct'$ and $x=ct$ (*), so $\;\frac{x'}{t'}=\frac{x}{t}=c$
(2') becomes $$v=(x/c-\gamma^{-1}t')c^{2}/x $$
with (x,t') as data, and the unknowns (t,x') follow from the relation $t=x/c \;,\; x'=ct'$
(*) More precisely, $\;x'^{2}-c^{2}t'^{2}=0$ $\Rightarrow$ $x'-ct'=0\;$ or $\;x'+ct'=0$, and likewise for $\;x^{2}-c^{2}t^{2}=0$.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
In Hawking's famous paper "particle creation by black holes", he expands the real scalar field $\phi$ in two ways. I am confused why he makes the choice he does in the second way.
First in the distant past (before the star has collapsed), he expands the field in (2.3) as
$$
\phi = \sum_{i} \big( f_i \mathbf{a}_{i} + f_i^{\ast} \mathbf{a}_{i}^{\dagger} \big)
$$
for some mode functions $f_{i}$ and ladder operators $\mathbf{a}_{i}$ .
In the second way, in the distant future (after the star has collapsed and so a black hole horizon has formed and so on), he expands the field in (2.4) as
$$
\phi = \sum_{i} \big( p_i\mathbf{b}_{i}+p_i^{\ast}\mathbf{b}_{i}^{\dagger}+q_i\mathbf{c}_{i} + q_i^{\ast} \mathbf{c}_{i}^{\dagger} \big)
$$
for some mode functions $p_{i}$ , $q_{i}$ and ladder operators $\mathbf{b}_{i}$ , $\mathbf{c}_{i}$ .
My question boils down to: in the second expansion, why do we need two types of ladder operators $\mathbf{b}_{i}$ and $\mathbf{c}_{i}$ ? There seem to be two Fock spaces being referred to here?
In the text below (2.4) goes on to say that $p_i$ are mode functions which are purely outgoing (with zero Cauchy data on the event horizon), and $q_{i}$ are solutions which contain no outgoing component (with zero Cauchy data on $\mathscr{I}^{+}$ , the distant future aka. future null infinity).
Why do the outgoing and incoming modes (at the horizon) seem to require separate Hilbert spaces? I do not understand this from Hawking's paper. Does the presence of the horizon split up your Hilbert space in a way analogous to how this happens in Rindler space? (There, you need different ladder operators in the right and left Rindler wedges, which is what this setup reminds me of.)
|
The answer is really quite simple: the field operator must be expanded in terms of a complete set of modes.
"Complete" in this case means "forming a complete basis for classical solutions on a Cauchy surface for the problem." Here ${f_i}$ is complete on Cauchy surface $J_-$ in Hawking's collapse spacetime, and ${p_i} \cup {q_i}$ is complete on Cauchy surface $J_+ \cup EH$ . So the field operator can be expanded in terms of either set.
The Hilbert space is the same in both cases, just expressed in different bases. It's not so different from, say, relating $e^{ikx} \leftrightarrow {a^\dagger_k}$ for one complete set, while using $\sin(lx) \leftrightarrow {b^\dagger_l}$ and $\cos(lx) \leftrightarrow {c^\dagger_l}$ ( $l>0$ ) for another in flat spacetime. (Note this is just illustrative, taken literally it would violate the positive frequency condition.)
So at a formal level, the answer has little to do with the existence of a horizon. It is merely convenient to split the late time modes into two subsets, each with a simple behavior at the horizon. You could just as well use a single letter to label all the modes, and split them up by (say) odd and even indices.
You might consult Dewitt 1975 for a nice review of the formalism.
|
All of the operators shown in the question act on the same Hilbert space. The operators $\mathbf{a}$ can be expressed in terms of $\mathbf{b}$ and $\mathbf{c}$ , and the operators $\mathbf{b}$ and $\mathbf{c}$ can be expressed in terms of $\mathbf{a}$ . This is implicit in the fact that the field operator $\phi$ can be expressed either in terms of $\mathbf{a}$ (Hawking's equation (2.3)) or in terms of $\mathbf{b}$ and $\mathbf{c}$ (Hawking's equation (2.4)), and Hawking shows it explicitly in equations (2.7)-(2.8).
The reason Hawking introduces these different ways of writing the same thing is explained on the same page as those equations:
In (2.3), the field operator is expressed using mode functions that have simple behavior on past null infinity, and he mentions that the fields are completely determined by specifying initial conditions there.
In (2.4), the field operator is expressed using mode functions that have simple behavior on future null infinity, but specifying data only one future null infinity is not sufficient, because the spacetime also has a future event horizon. Hawking chooses to use different symbols ( $\mathbf{b}$ and $\mathbf{c}$ ) for the operators associated with data specified on future null infinity and the horizon, respectively. Together, these operators generate the full algebra, just like the original operators $\mathbf{a}$ do.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
Is this a consequence of planet formation in accretion disks ?
|
I feel that HDE gives a good start to an answer, but stops short of the important part. We have seen in HDE's answer the formation of a star at the center of the collapsing molecular cloud. When the star begins to fuse lighter elements, the protoplanetary disk has several forces acting on it:
The momentum of the particles in the disk.
The gravity of the star in the center and other particles of the protoplanetary disk on the other side.
The gravity of the particles of the protoplanetary disk opposite the sun (outward).
The radiation pressure of the new star.
Interestingly, the second and third forces pretty much cancel out while the disk is still uniform in distribution. But once large clumps of matter accumulate, those large masses (protoplanets) gravitationally pull at the other matter in inconsistent and sometimes violent ways, especially when multiple protoplanets align (once per orbit of the inner body).
Thus, the area of the protoplanetary disk is being 'swept up' of all its mass: some is pulled into the Sun by gravity and loss of momentum due to collisions, some is pushed out of the solar system by radiation, and whatever matter was not disturbed by one of those processes is then subject to being disturbed by the gravity of the protoplanets themselves. The protoplanets will, over time, either absorb that matter or gravitationally slingshot it out of the system or to its very edges .
In short, the protoplanetary disk within a few tens of AU of the star in the center is a chaotic mess of a place. Not much matter can form a stable orbit there.
|
It's because of the Sun.
It might be good if I give a quick overview on star formation before I get to the meat of the issue. Here's star formation in a few simple steps:
Giant Molecular Cloud forms. A large region of gas and dust, essentially a dense version of the interstellar medium, coalesces into an interstellar cloud. GMCs can be tens or hundreds of light-years across, enough to give birth to many stars. Within the GMC, some regions will be slightly denser than others.
A portion of the cloud collapses. A certain region of the GMC collapses, generally due to an outside disturbance. The most commonly cited cause is a supernova shockwave that compresses portions of the GMC, although close passes between galaxies have been known to incite star formation . My favorite example is the Cartwheel galaxy .
The region heats up. There is quite a lot of matter pressing in on what has now become a protostar, and so it heats up. Eventually, conditions become such that hydrogen fusion is possible. The protostar , now a pre-main-sequence star , begins to shine.
A protoplanetary disk forms. At this point, the star dominates this region of the GMC. Matter nearby is pulled towards it by the force of gravity, and a circumstellar disk forms. It may be composed of gas and dust. Eventually, small grains of dust collide and form bigger grains. Planetesimals form, then protoplanets , and finally planets.
The reason that there isn't more matter in a given stellar system is that the star dominates the surrounding area. It pulls in nearly everything around it during its early life. Much of the region of the collapsing cloud is made of molecular $H_2$, and so it is pulled in and used for fusion.
Now the question translates to 'Why isn't the protoplanetary disk more massive?'. The answer is that when the disk formed, much of the matter that was in its inner reaches spiraled into the Sun. This is partly due to the Poynting-Robertson effect , where photons from the Sun pull dust grains in. Over billions of years, the star can accumulate much of the matter that was originally close to it in the disk.
|
HuggingFaceH4/pmp-stack-exchangedata/astronomy.stackexchange.com
|
I've read this post , but I wanted more clarification for a broader question.
In Keras, there are now three types of regularizers for a layer: kernel_regularizer , bias_regularizer , activity_regularizer .
I have read posts that explain the difference between L1 and L2 norm, but in an intuitive sense, I'd like to know how each regularizer will affect the aforementioned three types of regularizers and when to use what.
The motivation for my question is that my understanding is that regularizers are usually applied to the loss function. However, they're even being added to the bias term. I'm not able to wrap my head around why one would think to do this, let alone discern when to use L1 and L2 for the bias regularizer. Hence, I wanted to get an overall understanding of all three entities that regularizers are applied to, and to know at a high level how the two kinds of regularizers can affect each of those entities.
|
What is the difference between them?
You have the regression equation $y = Wx+b$ , where $x$ is the input, $W$ the weights matrix and $b$ the bias.
Kernel Regularizer: Tries to reduce the weights $W$ (excluding bias).
Bias Regularizer: Tries to reduce the bias $b$ .
Activity Regularizer: Tries to reduce the layer's output $y$ , and will thus reduce the weights and adjust the bias so that $Wx+b$ is smallest.
When to use which?
Usually, if you have no prior on the distribution that you wish to model, you would only use the kernel regularizer , since a large enough network can still model your function even if the regularization on the weights is big.
If you want the output function to pass through (or have an intercept closer to) the origin, you can use the bias regularizer .
If you want the output to be smaller (or closer to 0), you can use the activity regularizer .
$L_1$ versus $L_2$ regularization
Now, for the $L_1$ versus $L_2$ loss for weight decay (not to be confused with the outputs loss function).
$L_2$ loss is defined as $w^2$
$L_1$ loss is defined as $|w|$ .
where $w$ is a component of the matrix $W$ .
The gradient of $L_2$ will be: $2w$
The gradient of $L_1$ will be: $sign(w)$
Thus, for each gradient update with a learning rate $a$ , under the $L_2$ loss the weights are reduced by $2aW$ , while under the $L_1$ loss they are reduced by $a \cdot sign(W)$ .
The effect of $L_2$ loss on the weights is a reduction of large components in the matrix $W$ , while $L_1$ loss will make the weights matrix sparse, with many zero values. The same applies to the bias and output respectively using the bias and activity regularizer.
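To make this concrete, here is a minimal Keras sketch (assuming TensorFlow's tf.keras; the penalty coefficients are arbitrary illustration values, not recommendations) attaching all three regularizer types to a single Dense layer:

import tensorflow as tf
from tensorflow.keras import layers, regularizers

layer = layers.Dense(
    units=64,
    kernel_regularizer=regularizers.l2(1e-4),    # penalizes the weights W
    bias_regularizer=regularizers.l1(1e-5),      # penalizes the bias b
    activity_regularizer=regularizers.l2(1e-5),  # penalizes the output y = Wx + b
)

Each regularizer simply adds its penalty term to the model's total loss; the only difference between the three is which tensor (weights, bias, or layer output) the penalty is computed on.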
|
I will expand upon @Bloc97 's answer about the difference between $L1$ and $L2$ constraints, in order to show why $L1$ may drive some weights to zero.
In the case of $L2$ regularization, the gradient of a single weight is given by
$$ \delta w = u - 2pw$$
where $u$ is the input from the previous layer being multiplied by weight $w$ , and $p$ is parameter weighting the $L2$ penalty.
Without loss of generality, assume that $u>0$ and $w>0$ .
Then the sign of $\delta w$ is given by
$$ sign(\delta w) = sign(\frac{u}{2p} -w)$$
showing that $L2$ regularization will drive $w$ to grow bigger if $w$ drops below $\frac{u}{2p}$ .
On the other hand, in the case of $L1$ regularization, the gradient of a single weight is given by
$$ \delta w = u - p$$
so the sign of $\delta w$ is given by
$$ sign(\delta w) = sign(u-p)$$
showing that $L1$ regularization will drive $w$ to grow smaller when the input $u$ is smaller than the $L1$ regularization parameter $p$ .
Effectively, $p$ is functioning as a threshold such that, whenever $u$ is less than $p$ , $L1$ regularization will push the weight to grow smaller, and whenever $u$ is greater than $p$ , $L1$ regularization will push the weight to grow larger.
The above is a local linear approximation of a nonlinear system: $u$ is actually an average over, for example, all the samples in a batch, and $u$ also changes with each update. Nevertheless, it gives an intuitive understanding of how $L1$ regularization tries to drive some weights to zero (given large enough $p$ ).
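A minimal numeric sketch of the two decay updates discussed above (pure weight decay, with the data-fit term omitted; the values of $a$ and $p$ are arbitrary illustration choices):

import numpy as np

a, p = 0.1, 1.0   # learning rate and penalty strength (illustrative)
w_l2, w_l1 = 1.0, 1.0
for _ in range(30):
    w_l2 -= a * 2 * p * w_l2          # L2: geometric shrinkage, never exactly zero
    w_l1 -= a * p * np.sign(w_l1)     # L1: constant-size steps toward zero
print(w_l2, w_l1)

The $L2$ weight decays geometrically (here to about $0.8^{30}\approx 10^{-3}$) but never reaches zero, while the $L1$ weight marches to zero in constant steps and then oscillates in a band of width $a\cdot p$ around it; practical $L1$ optimizers add a soft-thresholding step so the weight lands exactly on zero.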
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
For example (and this is not my use case), imagine there is a set of 50 000 random natural numbers represented in decimal with 8 digits (in the range 00 000 000 to 99 999 999). We could index the set with an index for each digit position (i.e. an array of pointers to all numbers ordered by their first digit, another one ordered by the second digit, etc.). Imagine we wanted to know what numbers are in the list with at most 2 digits differing from 01 234 567.
We could start by finding the combinations of positions where the digits could differ. Then, for each combination, we could search the set for numbers with all the digits equal to 01 234 567 except for the 2 digits we chose. While it is fast to find the set of numbers where the first digit is 0 thanks to the index, it is, AFAICT, very inefficient to intersect that set with the set of numbers where the second digit is 1, and it is more efficient to directly check what numbers that start with 0 also match the rest of the condition. This is also not very efficient since it only reduces the search space to around one tenth of the entire set. In this case, we only need to do this search 3 times with 3 different indexes since any combination of 3 positions will never be completely covered by the combination of 2 positions where the digits could differ, but it's still quite slow.
For my use case:
The number of "digits" is variable but less than 30
The number of "digits" which can be different can be any number from 0 to the number of "digits"
There are around 400k "numbers" in total
The smaller the search time and memory/storage usage, the better.
Is there any way to perform the intersection efficiently? Or is there a more appropriate data structure for indexing the list?
|
The name that I normally use for this is "Hamming distance search".
So... 50,000 numbers isn't that many, and probably the simplest solution is to store them in a trie. Hamming distance search on a trie is extremely easy: you conceptually traverse all paths in the trie, pruning when the Hamming distance between the query and the path exceeds the threshold (2 in your case).
Note that if the key is a number (as it is in your case), each node in the trie represents a range of numbers. So if you had a data structure which queries intervals (i.e. it tells you how many numbers are in the set that are between two given query numbers), you can essentially do the same type of search that you would do on a trie. A rank/select index would be appropriate here, but even a sorted array with binary search might be efficient enough on 50,000 numbers.
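Here is a minimal sketch of the pruned traversal (assuming the keys are stored as fixed-length digit strings in a nested-dict trie; all names are illustrative):

def build_trie(keys):
    root = {}
    for key in keys:
        node = root
        for ch in key:
            node = node.setdefault(ch, {})
        node["$"] = key            # mark a complete key at the leaf
    return root

def hamming_search(node, query, i=0, mismatches=0, k=2, out=None):
    if out is None:
        out = []
    if mismatches > k:
        return out                 # prune this whole subtree
    if i == len(query):
        if "$" in node:
            out.append(node["$"])
        return out
    for ch, child in node.items():
        if ch != "$":
            hamming_search(child, query, i + 1,
                           mismatches + (ch != query[i]), k, out)
    return out

trie = build_trie(["01234567", "01294967", "99999999"])
print(hamming_search(trie, "01234567", k=2))   # ['01234567', '01294967']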
|
You haven't listed any constraints on the amount of space used, so a simple solution is to build ${8 \choose 2} = 28$ indices. Each index lets you find whether a number is in the set, based on the value of 6 of the 8 digits. Each index could be stored as a hashtable, that given a 6-digit number (the value of the digits in those 6 positions) tells you whether that number is in the set; or it could be stored as a sorted array, with lookups done by binary search.
This enables fast lookup. Given a 8-digit number like 01234567, you enumerate over all 28 possibilities for the pair of digits that might differ, and look it up in each index. This requires 28 index lookups, which should be very fast.
There are more sophisticated solutions available, e.g., if you have very long strings, but this might be good enough for the parameters you list.
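One equivalent way to realize these indices is to key each of the ${8 \choose 2} = 28$ hashtables by the pattern obtained by wildcarding the two chosen positions. A minimal sketch (assuming 8-digit zero-padded strings; names are illustrative):

from itertools import combinations

D, K = 8, 2

def patterns(s):
    for pos in combinations(range(D), K):
        yield pos, "".join("*" if i in pos else c for i, c in enumerate(s))

def build_indices(numbers):
    index = {}
    for s in numbers:
        for pos, pat in patterns(s):
            index.setdefault((pos, pat), set()).add(s)
    return index

def query(index, s):
    hits = set()
    for pos, pat in patterns(s):
        hits |= index.get((pos, pat), set())
    return hits

idx = build_indices(["01234567", "01294967", "88888888"])
print(query(idx, "01234567"))   # {'01234567', '01294967'} (order may vary)

Two strings agree outside some pair of positions exactly when their Hamming distance is at most 2, so the union over the 28 lookups returns exactly the desired set.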
|
HuggingFaceH4/pmp-stack-exchangedata/cs.stackexchange.com
|
The uncertainty principle between the position $x$ and the momentum $p$ is given by: $$ \sigma_x \sigma_p \geq \hbar/2,$$ whereas for the $x$ and $y$ components of the angular momentum is given by:
$$ \sigma_{L_x} \sigma_{L_y} \geq \frac{\hbar}{2}\left|\langle L_z\rangle\right| .$$
What is the physical meaning of the Right Hand Side being just a number or an expectation value?
EDIT: I realise the expectation value itself is just a number, but it can take several different values depending on the state, as opposed to a constant
|
The Heisenberg uncertainty principle in the most general form
$$\Delta_\omega(A)\Delta_\omega(B)\geq\frac12|\omega([A,B])|$$
depends on the state $\omega$ on which it is evaluated. In the special case of the canonical commutation relations $[q,p]= i\hbar I$, $\omega(I)=1$ for any state and therefore the RHS reduces to a constant. For more general commutators however this won't be the case and a dependence on the state $\omega$ will remain.
|
The uncertainty product is bounded from below by the expectation value of the commutator of the relevant observables. If $A$ and $B$ are any two observables, then the generalized Heisenberg uncertainty relation reads as
$$ \sigma_A\sigma_B \geq \frac{1}{2}\vert \langle[A,B]\rangle\vert .$$
For the case of position - linear momentum pair, the commutator is $[x,p]=i\hbar$ and so the right-hand side of the inequality above becomes independent of the state.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
Suppose my system involves:
1) A mounted wheel with some outward flap
2) A bullet already in motion
Initially the net angular momentum is 0 and the net kinetic energy is just that of the speeding bullet.
The bullet hits the flap, causing the wheel to turn, and continues on (slightly slower).
Now the net angular momentum of the system is > 0 and the net kinetic energy is lower.
1) Is energy being converted into angular momentum here (so net energy is conserved)?
2) How is the net angular momentum of this system being conserved with the net amount before/after has changed?
|
I'll address one underlying issue.
It's important to remember that objects moving in straight lines can have angular momentum. Your bullet can, for example.
The definition of angular momentum $\vec L$ for some point object is:
$$\vec L \equiv \vec r \times \vec p.$$
In that definition, $\vec r$ is the position vector of your object and $\vec p$ is the momentum of the object. So as long as the cross product of the position and momentum vectors is non-zero, something moving in a straight line can have angular momentum.
Now, there are other expressions for angular momentum. You may have seen $\vec L = I \vec\omega,$ which is quite useful for spinning objects. This is actually a special case that can be derived from the definition above.
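To see this concretely, here is a minimal numeric sketch (assuming NumPy; the bullet's mass, speed, and offset are illustrative): for straight-line motion, $\vec r \times \vec p$ stays constant, since $(\vec r_0 + \vec v t)\times m\vec v = \vec r_0 \times m\vec v$.

import numpy as np

r0 = np.array([0.0, 1.0, 0.0])      # bullet passes 1 m from the origin
v = np.array([300.0, 0.0, 0.0])     # straight-line motion along x
m = 0.01                             # 10 g bullet
for t in (0.0, 0.5, 1.0):
    r = r0 + v * t
    print(np.cross(r, m * v))        # constant: [0, 0, -3] kg m^2/s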
|
So at the moment that the bullet hits the outward flap, the bullet will decelerate and the wheel will accelerate (at the moment of impact there is conservation of energy and momentum).
Now if the wheel were free floating (for example, a bullet hitting a wheel in space), the wheel would start moving (and probably turning) in the direction of the bullet, and the conservation of energy and momentum would be (somewhat) obvious.
But my guess is that you imagine a wheel on earth, mounted onto some (frictionless) turning mechanism. In that case we still have conservation of energy, but NO CONSERVATION OF ANGULAR MOMENTUM. The fact that the mount forces the wheel to stay in place (and hence the flap to start turning) changes the angular momentum (the mount exerts an external force onto the system).
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
Theorem 12 of the following link asserts the following:
$\textbf{Theorem.}$ Let $\chi \in X_{N}$ with $\chi \neq \epsilon$. There exists $C > 0$ such that $$L(s,\chi) = L(1,\chi) + O(s-1)$$ as $s \to 1^{+}$. In particular, $$\lim_{s \to 1^{+}} L(s,\chi) = L(1,\chi).$$
The proof is as follows: Let $1< s < 2$. From the proof of $\textbf{Theorem 9}$ we have $$L(s,\chi) - L(1,\chi) = \sum\limits_{n=1}^{\infty} a_{n} \Biggl[\biggl(\frac{1}{n^s} - \frac{1}{(n+1)^{s}}\biggr) - \biggl(\frac{1}{n} - \frac{1}{n+1}\biggr)\Biggr]$$ where the sequence $\{a_{n}\}$ is bounded. Applying the mean value theorem to the function $s \mapsto n^{-s} - (n+1)^{-s}$ gives a sequence $\{s_{n}\}$ with $1 < s_{n} < s$ and $$L(s,\chi) - L(1,\chi) = (s-1) \sum\limits_{n=1}^{\infty} a_{n} \Biggl[\frac{\log\:(n+1)}{(n+1)^{s_n}} - \frac{\log\:(n)}{n^{s_n}} \Biggr] \qquad \qquad \cdots\cdots (1)$$
I don't understand how $(1)$ is derived. When I applied the Mean-Value-Theorem to the function $f(s)=x^{-s} - (x+1)^{-s}$ on $[n,n+1]$ i get $$f'(s) = -x^{-s}\log\:(x) + (x+1)^{-s}\log\:(x+1).\hspace{40pt}(\ast)$$ So by the Mean-Value-Theorem i get an $s_{n} \in (n,n+1)$ such that $$f'(s_{n}) = -n^{-s_n}\log\:(n) + (n+1)^{-s}\log\:(n+1) - (n+1)^{-s_n}\log\:(n+1) + (n+2)^{-s_n}\log\:(n+2)$$ which gives
\begin{align*}
f'(s_{n}) &= \frac{\log(n+2)}{(n+2)^{s_n}} - \frac{\log(n)}{n^{s_n}} \\
&= \frac{f(b)-f(a)}{b-a} = (n+1)^{-s} - (n+2)^{-s} - (n+1)^{-s} + n^{-s} \\
&= \frac{1}{n^s} - \frac{1}{(n+2)^s}
\end{align*}
Am I making a mistake? I am not able to see how the author gets to that step.
Are there any other nice proofs of the above theorem which you people would like to recommend?
|
Since you edited the question but did not say that it is clear now, I assume you are hoping for some details in addition to what Ralph said. So:
Let $f_n(s) = \frac{1}{n^s}- \frac{1}{(n+1)^{s}}$; the derivative of this with respect to $s$ is, as you computed, $\frac{-\log n}{n^s}+ \frac{\log(n+1)}{(n+1)^{s}}$. This is a function of $s$, and you do not consider it on $[n,n+1]$, but rather on $[1,s]$ for each $n$.
The MVT tells you that $\frac{f_{n}(s) - f_n(1)}{s -1} = f_n'(s_n)$ for some $s_n\in [1,s]$.
Multiplying by $(s-1)$ and plugging in the explicit expressions for the functions this means
$$(\frac{1}{n^s}- \frac{1}{(n+1)^{s}}) - (\frac{1}{n} -\frac{1}{n+1}) = (s-1)(\frac{-\log n}{n^{s_n}}+ \frac{\log (n+1)}{(n+1)^{s_n}}).$$
The left hand side appears in the original sum, and the result is obtained by instead plugging in the right hand side.
|
It is a standard fact that $L(s,\chi)$, initially defined as a locally uniformly convergent Dirichlet series in $\Re s>1$, extends to a holomorphic function on $\mathbb{C}$. See for example Chapter 9 in Davenport: Multiplicative Number Theory, especially the first half of Page 69.
Now basic complex analysis tells us that the Taylor series of $L(s,\chi)$ around $s=1$ converges absolutely on $\mathbb{C}$, of which
$$ L(s,\chi)=L(1,\chi)+O(|s-1|) $$
is a consequence. In fact for the last equation we only need to know that $L'(s,\chi)$ is continuous on $\mathbb{C}$, which follows directly from Cauchy's integral formula.
A quick proof of the holomorphicity of $L(s,\chi)$ in $\Re s>0$ follows from the fact that the partial sums $\sum_{n\leq x}\chi(n)$ are bounded. See Proposition 9 in Section VI.2 of Serre: A course in arithmetic, or Theorem 1.3 in Montgomery-Vaughan: Multiplicative number theory I.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
Here is an image of a muffler of an R/C aircraft engine. It's made of aluminium alloys. What do you think about the welding technology which is used to make this muffler? I can't see any welding bead.
|
No conventional welds are visible. It could have been made with a furnace solder. Flux and solder are placed at the joints and the unit is put into a furnace and heated; the solder melts and flows into gaps by capillary action. I am not sure a solder would hold up, though, depending on the temperature the muffler reaches. A zinc-aluminum solder flows at roughly 700 F but would have low strength at about 400 F. Conventional engine exhaust manifolds reach 1200 F.
|
Cheaper than soldering would be press-fitting the parts together. To do this, at the joint where the (for example) end plug goes into the muffler cylinder, the plug diameter is slightly larger than the inside diameter of the cylinder. When you smash them together with great force, they slide together to form a joint that is nearly impossible to pull apart, yet requires no screws, bolts, welding, soldering or glue to hold it together- and it is fast and cheap to perform in a factory.
|
HuggingFaceH4/pmp-stack-exchangedata/engineering.stackexchange.com
|
So I want to describe the premise better. By a helicopter I mean something that can stay suspended in air perfectly and can move around in any direction freely. To generalize the motion of the helicopter mathematically, assume that the position of the helicopter in each dimension is a continuous and differentiable function. You can assume that the platform is a square plate with 4 strings from each vertex that are connected to a single point at the bottom of the helicopter. Assume ideal conditions like the wind of the helicopter blades doesn't affect the platform and that there's no air resistance or friction. At t=0, the platform is perfectly horizontal and the helicopter, the ball and the platform are at rest.
So the question is, what does it take for the helicopter to drop the ball? In my opinion it's certainly harder than it seems. If it starts to accelerate uniformly and slowly in a particular direction, the platform could tilt of course, but in the frame of reference of the platform, the forces on the ball will be balanced perfectly by the pseudo force due to the acceleration.
One condition in which the ball would fall is if the downward acceleration of the helicopter were to exceed the acceleration due to gravity. In that situation the ball would "levitate" from the platform's perspective and then the helicopter can just move away horizontally and drop the ball. But I don't know if that's a complete answer.
Also, how would the answer be affected if the strings couldn't "bend"? As in they were rigid beams that could move around just as freely?
|
There is not one single special potential function, rather the opposite. The potential function is a placeholder that takes a different functional form depending on what kind of physical situation you want to model. The physics and the system that we want to describe goes into the Schrödinger equation via this potential function.
The only information that the equation gives you written in this way is the fact that it has to be a function depending only on the position variable. The function $V(x)$ may not depend on derivatives of $x$ for example.
Some basic examples for potentials are the particle in a box potential,
$$
V(x) = \begin{cases} 0, & -L/2 < x < L/2 \\ \infty, & \textrm{otherwise} \end{cases}
$$
With this we can model situations where a particle can move freely in a certain area, but unable to escape.
Another potential would be a harmonic potential,
$$
V(x) = \frac{1}{2}m\omega^2x^2
$$
With this we can model situations where a particle is, for example, resting in a local minimum that looks like a parabola. This can describe, for example, molecules in their stable ground-state geometry. Another example that is described by a harmonic potential would be the time-dependent amplitudes of the electromagnetic vector potential.
Potential functions are also often so complicated that we are only able to obtain approximate solutions.
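For instance, here is a minimal numeric sketch for the harmonic potential above (assuming NumPy, in units $\hbar=m=\omega=1$): diagonalizing a finite-difference discretization of $H=-\frac{1}{2}\partial_x^2+\frac{1}{2}x^2$ recovers the expected spectrum $E_n=n+\frac{1}{2}$.

import numpy as np

n, L = 1000, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
V = 0.5 * x**2
# -1/2 psi'' via the 3-point stencil: diagonal 1/dx^2, off-diagonals -1/(2 dx^2)
H = (np.diag(1.0 / dx**2 + V)
     - np.diag(np.full(n - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(n - 1, 0.5 / dx**2), -1))
print(np.linalg.eigvalsh(H)[:4])   # approximately [0.5, 1.5, 2.5, 3.5]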
|
The potential energy in the Schrödinger equation is the electrostatic one. Here are a few points to note:
Since the Schrödinger equation is used on a microscale, we exclude the "macroscopic" kinds of potential energy familiar from Newtonian mechanics, such as, e.g., the "elastic potential energy", which are really the result of the electrostatic interaction between many particles.
This leaves us with four fundamental interactions acting on the particle level: electromagnetic, strong, weak and gravitational.
I am not sure whether there is a generally accepted theory of gravity at the quantum level, so I would say that gravitational forces never appear in the Schrödinger equation.
Weak and strong interactions, in principle, could appear in the Schrödinger equation, but a) they are rarely reducible to a purely potential interaction, b) they are usually treated using more sophisticated mathematical techniques, and c) they are often treated in the relativistic limit, where the Schrödinger equation does not apply.
This leaves us with electromagnetic interactions, i.e., the scalar and the vector potentials. Thus, the potential energy in question is
$$
V(\mathbf{r}) = -e\varphi(\mathbf{r}),
$$
since the particle in question is usually an electron.
This is sufficient for describing the physics of atoms and condensed matter in non-relativistic limit, and taking properly into account the exchange interaction.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
Might anyone be able to explain the difference between:
Algebraic Datatypes (which I am fairly familiar with)
Generalized Algebraic Datatypes (what makes them generalized?)
Inductive Types (e.g. Coq)
(Especially inductive types.) Thank you.
|
Algebraic data types let you define types recursively. Concretely, suppose we have the datatype
$$
\mathsf{data\;list = Nil \;\;|\;\; Cons\;of\;\mathbb{N} \times list}
$$
What this means is that $\mathsf{list}$ is the smallest set generated by the $\mathsf{Nil}$ and $\mathsf{Cons}$ operators. We can formalize this by defining the operator $F(X)$
$$
F(X) = \{ \mathsf{Nil} \} \cup \{ \mathsf{Cons}(n, x) \;|\; n \in \mathbb{N} \land x \in X \}
$$
and then defining $\mathsf{list}$ as
$$
\mathsf{list} = \bigcup_{i \in \mathbb{N}} F^i(\emptyset)
$$
A generalized ADT is what we get when define a type operator recursively. For example, we might define the following type constructor:
$$
\mathsf{bush}\;a = \mathsf{Leaf\;of\;}a \;\;|\;\; \mathsf{Nest\;of\;bush}(a \times a)
$$
This type means that an element of $\mathsf{bush\;}a$ is a tuple of $a$s of length $2^n$ for some $n$, since each time we go into the $\mathsf{Nest}$ constructor the type argument is paired with itself. So we can define the operator we want to take a fixed point of as:
$$
F(R) = \lambda X.\; \{ \mathsf{Leaf}(x) \;|\; x \in X\} \cup \{ \mathsf{Nest}(v) \;|\; v \in R(X) \}
$$
An inductive type in Coq is essentially a GADT, where the indexes of the type operator are not restricted to other types (as in, for example, Haskell), but can also be indexed by values of the type theory. This lets you give types for length-indexed lists, and so on.
|
Consider algebraic datatypes such as:
data List a = Nil | Cons a (List a)
The return types of each constructor in a datatype are all the same: Nil and Cons both return List a . If we allow the constructors to return different types, we have a GADT :
data Empty -- this is an empty data declaration; Empty has no constructors
data NonEmpty
data NullableList a t where
Vacant :: NullableList a Empty
Occupied :: a -> NullableList a b -> NullableList a NonEmpty
Occupied has the type a -> NullableList a b -> NullableList a NonEmpty , while Cons has the type a -> List a -> List a . It is important to note that NonEmpty is a type, not a term. Another example:
data Zero
data Succ n
data SizedList a t where
Alone :: SizedList a Zero
WithFriends :: a -> SizedList a n -> SizedList a (Succ n)
Inductive types in programming languages that have dependent types allow the return types of the constructors to depend on the values (not just the types) of the arguments.
Inductive Parity := Even | Odd.
Definition flipParity (x:Parity) : Parity :=
match x with
| Even => Odd
| Odd => Even
end.
Fixpoint getParity (x:nat) : Parity :=
match x with
| 0 => Even
| S n => flipParity (getParity n)
end.
(*
A ParityNatList (Some P) is a list in which each member
is a natural number with parity P.
*)
Inductive ParityNatList : option Parity -> Type :=
Nil : forall P, ParityNatList P
| Cons : forall (x:nat) (P:option Parity),
ParityNatList P -> ParityNatList
(match P, getParity x with
| Some Even, Even => Some Even
| Some Odd, Odd => Some Odd
| _, _ => None
end).
A side note: GHC has a mechanism for treating value constructors as type constructors . This is not the same as the dependent inductive types that Coq has, but it lessens the syntactic burden of GADTs somewhat, and it can lead to better error messages.
|
HuggingFaceH4/pmp-stack-exchangedata/cstheory.stackexchange.com
|
I believe we all know the famous equation $$F=ma$$
My attempt to prove this:
Newton's second law states that the rate of change of momentum of a body is directly proportional to the resultant force acting on the body, and this takes place in the same direction as the resultant force.
Hence, I can get $\frac{d\bar{p}}{dt}\varpropto\sum\bar{F}$. From here, we have $\sum\bar{F}=k\frac{d\bar{p}}{dt}$, where $k$ is a constant. We find that $k=1$ by comparing S.I. units. Now we have $$\sum\bar{F}=\frac{d\bar{p}}{dt}$$
By Newton's second law, $p=mv$, $$\sum\bar{F}=\frac{d(m\bar{v})}{dt}$$
By product rule, $$\sum\bar{F}=m\frac{d\bar{v}}{dt}+\bar{v}\frac{dm}{dt}$$
How do I get $F=ma$ from here? I know that the first term is $ma$, meaning that the second term $\bar{v}\frac{dm}{dt}$ must be $0$. Why is this so?
|
When you use the equation $F=ma$ the assumption made, but often not stated, is that the mass is constant.
Your question highlights the fact that when mass flows, the application of Newton's second law must be done with care, as illustrated in these examples .
|
You should know that the derivative of a constant is 0.
That's why $\vec{v} \dfrac{dm}{dt} = 0$ as mass is constant.
Hence, it comes down to $\vec{F} = m \vec{a}$.
Your equation would hold true when the system has a variable mass, like in rocket motion.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
I suspect that the $N$ factorial in the partition function for N indistinguishable particles
$$ Z = \frac{ Z_0^N } {N!} $$
is an approximation. Please someone correct me if I am wrong and why or why not. Thanks.
A simple case :
each particle has two states with energy $0$ and $E$. The partition function for a single particle is
$$ Z_0 = 1 + e^{- \beta E} . $$
If there are only two particles, there is the total partition function
$$
Z = \frac{ Z_0^2 } {2}.
$$
But regarding the whole system consisting of these two particles, we can also write
$$
Z = 1 + e^{- \beta E} + e^{-2 \beta E} .
$$
And it is certain that
$$
\frac{ Z_0^2 } {2} \neq 1 + e^{- \beta E} + e^{-2 \beta E}
$$
|
Consider the different configurations that are possible with 3 particles and 5 energy levels (shown in a figure in the original answer): in type 1 all three particles occupy distinct levels, in type 2 two particles share a level, and in type 3 all three occupy the same level. Dividing by 3! gets the symmetry factor correct only for configurations of type 1 but is wrong for configurations of type 2 and 3. You can see this by explicitly writing out $Z$ and comparing with $z^3/3!$. That is why the OP's statement that $z^3/3!$ is an approximation is correct. I can add details to this, if necessary. (Notation: The lower-case $z$ is the single-particle partition function)
To add to Josh's remark above, even for fermions, where only terms of type 1 are allowed, the expansion of $z^N/N!$ contains terms of type 2 and 3 which are not present in $Z$. Nevertheless, the dominant contribution at large $N$ (as well as number of energy levels) is from terms of type 1. Hence my statement that it is a fairly good approximation holds.
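A minimal numeric check of this claim (assuming bosons, $N$ particles in $M$ equally spaced levels $E_k = k$; the exact $Z$ enumerates multisets of occupied levels):

from itertools import combinations_with_replacement
from math import exp, factorial

N, M = 3, 60
for beta in (1.0, 0.1, 0.01):
    exact = sum(exp(-beta * sum(c))
                for c in combinations_with_replacement(range(M), N))
    z = sum(exp(-beta * k) for k in range(M))
    print(beta, (z**N / factorial(N)) / exact)

The ratio climbs toward 1 as the temperature rises (roughly 0.34, 0.86, 0.95 here): $z^N/N!$ becomes a good approximation precisely when the number of thermally accessible levels is much larger than $N$, so that configurations of type 2 and 3 are rare.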
|
Actually, it's exact. The flaw is "regarding the whole system consisting of these two particles, we can also write" $Z = 1 + e^{- \beta E} + e^{-2 \beta E}.$ Assuming the two particles are distinguishable, we have
$$Z=\sum_ig_ie^{-\beta E_i}=1+2e^{-\beta E}+e^{-2\beta E}=Z_0^2,$$
with the $2e^{-\beta E}$ since the state of energy $E$ is doubly-degenerate.
The additional factor of 1/2 in $Z=Z_0^2/2!$ accounts for indistinguishability.
EDIT: This seems to be wrong. See other answers.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
Let $n,m\in\mathbb{N}$ . Is there a formula for the number of subgroups of index $n$ in $\mathbb{Z}^m$ ? Perhaps in terms of the divisors of $n$ ?
|
Yes. This is given by OEIS sequence A160870 . The number of subgroups of index $n$ in $\mathbf{Z}^m$ is there denoted $T(n,m)$ . There is a recursive formula in terms of the divisors of $n$ given at this page. The initial conditions are
$$ T(n,1) = 1 \quad \textrm{ for all } n\in \mathbb{N}$$
and recursively for $m > 1$ , we have either
$$
\quad T(n,m) = \sum_{d \mid n} \left(\frac{n}{d}\right)^{m-1} \cdot T(d, m-1)
$$
or equivalently,
$$
\quad T(n,m) = \sum_{d \mid n} d \cdot T(d, m-1)
$$
Note that we can solve this recurrence to get the "explicit" formula
$$ T(n,m) = \sum_{\substack{(d_0,d_1,\ldots,d_m)}} d_1 \cdots d_{m-1}$$
where the sum is over all sequences of integers $(d_0,d_1,\ldots,d_m)$ with $d_0=1$ , $d_m=n$ , and $d_i \mid d_{i+1}$ for all $i=0,\ldots,m-1$ .
For example, there are $T(4,5) = 651$ subgroups of index $4$ in $\mathbf{Z}^5$ .
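A minimal sketch of the recursion above, which reproduces this value:

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n, m):
    if m == 1:
        return 1
    return sum(d * T(d, m - 1) for d in range(1, n + 1) if n % d == 0)

print(T(4, 5))   # 651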
|
The number $\sigma_d(N)$ of subgroups of index $N$ in $\mathbb Z^d$ is also given by the formula (see Gruber, B. (1997), Alternative formulae for the number of sublattices, Acta Cryst. A53, 807--808, and Zou, Y.M. (2006), Gaussian binomials and the number of sublattices, Acta Cryst. A62, 409--410)
$$
\sigma_d(N)=\prod_{p\mid N}\binom{e_p+d-1}{d-1}_p
$$
where $\prod_{p\mid N}p^{e_p}=N$ is the factorization of $N$ into prime powers and where
$$
\binom{e_p+d-1}{d-1}_p=\prod_{j=1}^{d-1}\frac{p^{e_p+j}-1}{p^j-1}
$$
is the evaluation of the $q$-binomial
$$
\left[\begin{array}{c}e_p+d-1\\d-1\end{array}\right]_q=\frac{[e_p+d-1]_q!}{[e_p]_q!\ [d-1]_q!}
$$
(with $[k]_q!=\prod_{j=1}^k\frac{q^j-1}{q-1}$) at the prime divisor $p$ of $N$.
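A minimal sketch of this product formula (naive trial-division factorization; the running product stays integral at every step because the partial products are themselves Gaussian binomials), cross-checked against $T(4,5)=651$ from the other answer:

def sigma(d, N):
    total, p, n = 1, 2, N
    while n > 1:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            for j in range(1, d):          # the p-binomial (e+d-1 choose d-1)_p
                total = total * (p**(e + j) - 1) // (p**j - 1)
        p += 1
    return total

print(sigma(5, 4))   # 651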
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
What is known about the set of well orderings of $\aleph_0$ in set theory without choice? I do not mean the set of countable well-order types, but the set of all subsets of $\aleph_0$ which (relative to a pairing function) code well orderings. And I would be interested in an answer in, say, ZF without choice. My actual concern is higher order arithmetic.
I would not be surprised if ZF proves there are continuum many. But I don't know.
At the opposite extreme, is it provable in ZF that there are not more well orderings of $\aleph_0$ than there are countable well-order types?
|
Colin, there are continuum many, as you suspect.
In fact, there are continuum many well-orderings of type $\omega$. The set of infinite binary sequences has size continuum. Given such a sequence $x=(x_0,x_1,\dots)$, let $i\in\{0,1\}$ be least such that $x_n=i$ infinitely often. Consider the enumeration of the naturals $a=(a_0,a_1,\dots)$ that begins with $a_0=i$. Having defined $a_n$, let $a_{n+1}$ be the first natural number not used so far, if $x_n=i$, and let $a_{n+1}$ be the second number not used so far, otherwise.
Since there are infinitely many $k$ such that $x_k=i$, the $a_n$ enumerate all naturals. Since from the sequence we can easily recover $x$, this assignment $x\mapsto a$ is injective. The ordering $a_0\lt a_1\lt a_2\lt\dots$ is a well-ordering of the naturals in type $\omega$.
It follows immediately that, for any countable infinite $\alpha$, there are continuum many well-orderings of the naturals in type $\alpha$. This is because one can simply fix a bijection between $\alpha$ and $\omega$, and use it to "transfer" the procedure just described.
|
Consider the tree of finite partial attempts to build a well-ordering, and notice that it has size continuum.
More rigorously, let:
$$T = \{ f : n \to \omega\ |\ n \in \omega, f \mbox{ injective } \}$$
ordered by extension. This is clearly an $\omega$ branching tree of height $\omega$, and its branches are precisely the injections $\omega \to \omega$. But we're interested in the set of well-orderings of $\omega$. Now, those injections which are bijections give us distinct well-orderings, but perhaps there are too few of them. What about the branches that aren't surjections? We can create distinct well-orderings out of them too: if a branch $b$ is not surjective and $X$ is the set of naturals missed by its range, consider the well-ordering obtained by taking $b$, then concatenating on to its end the numbers in $X$, ordered naturally.
So the branches of our tree are in bijection with a set of well-orderings of $\omega$, and there are continuum many branches, so there are continuum many well-orderings. Note that the set of well-orderings we get is not even the set of all well-orderings. In particular every well-ordering we get has order type $\leq \omega + \omega$.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
Why do we interpret the results of logistic regression as probabilities? Passing the output of any regression procedure through a sigmoid function results in a probabilistic interpretation with respect to classification. Why is that so? Given that the output is between 0 and 1, is it enough to interpret the results as probabilities?
|
Why do we interpret the results of logistic regression as
probabilities?
Because the logistic regression model can be viewed as arising from a linear regression latent variable model, where the error term of this linear regression is assumed to follow the standard logistic distribution. See for example this post .
Given that the output is between 0 and 1, is it enough to interpret
the results as probabilities?
No. The "output" must come from a function that satisfies the properties of a distribution function in order for us to interpret it as probabilities. These properties are:
1) The function $F$ under consideration must be non-decreasing and right-continuous ("cadlag")
2) $\lim_{x\rightarrow -\infty}F(x) =0$
3) $\lim_{x\rightarrow \infty}F(x) =1$
The "sigmoid function" satisfies these properties.
|
A probability is bounded between 0 and 1 (inclusive), and a sigmoid curve is a convenient curve that can be forced to respect those bounds. It is not the only one, but the sigmoid curve has proven to be the most popular one. So, not all probability models have a sigmoid curve, though many do.
Moreover, not all models with a sigmoid curve model probabilities. For example, models that model a dependent variable that is a fraction also often use a sigmoid curve.
What makes logit, probit, and similar models model a probability is the fact that they model the conditional mean of an indicator variable, which is the conditional proportion of $1$s, which in turn is interpreted as the probability of having a $1$ on that indicator variable.
|
HuggingFaceH4/pmp-stack-exchangedata/stats.stackexchange.com
|
Occasionally I find myself in a situation where a naive, non-rigorous computation leads me to a divergent sum, like $\sum_{n=1}^\infty n$. In times like these, a standard approach is to guess the right answer by assuming that secretly my non-rigorous manipulations were really manipulating the Riemann zeta function $\zeta(s) = \sum_{n=1}^\infty n^{-s}$ and its cousins. Then it's reasonable to guess that the "correct" answer is, for example, $\sum_{n=1}^\infty n = \zeta(-1) = -\frac1{12}$. Thus the zeta function and its cousins are a valuable tool for other non-number-theoretic problem solving: it's always easier to rigorously prove that your guess is correct (or discover, in trying to prove it, that it's wrong) than it is to rigorously derive an answer from scratch.
I recently found myself wishing I could do something similar for the sum of the quantum integers. Recall that at quantum parameter $q = e^{i\hbar}$, quantum $n$ is the complex number $$[n]_q = \frac{q^n - q^{-n}}{q - q^{-1}} = q^{n-1} + q^{n-3} + \dots + q^{3-n} + q^{1-n}.$$ The point is that $[n]_1 = n$.
Question: Are there established methods to sum the divergent series $\sum_{n=1}^\infty [n]_q $ and its cousins? For example, is there some well-behaved function $\zeta_q(s)$ for which the series is naturally the $s=-1$ value?
Note that when $q$ is a root of unity, the series truncates, and it would be nice (but maybe too much to hope for) if the regularized series agreed with the truncated series at these values.
I should mention also that I consider the following answer tempting but inaccurate, as it definitely doesn't work at roots of unity, which I do care about:
$$ \sum_{n=1}^\infty [n]_q = \frac1{q-q^{-1}} \sum_{n=1}^\infty (q^n - q^{-n}) = \frac1{q-q^{-1}} \left( \sum_{n=1}^\infty q^n - \sum_{n=1}^\infty q^{-n}\right) = $$
$$ = \frac1{q-q^{-1}} \left( \frac{q}{1-q} - \frac{q^{-1}}{1-q^{-1}}\right) = \frac{q+1}{(q-q^{-1})(1-q)}$$
|
The paper by Cherednik On q-analogues of Riemann's zeta function ( arXiv:math/9804099 ) gives precisely the definition you're after:
$$
\zeta_q(s)=\sum\limits_{n=1}^\infty q^{sn}/[n]_q^s
$$
His paper also contains a brief discussion of the properties of this $q$ -zeta function.
On the other hand, the term quantum zeta function appears to have a somewhat different meaning, see e.g. the paper On the quantum zeta function by R.E. Crandall.
|
Here is another article dealing with similar functions:
q-analogue of Riemann’s ζ-function and q-Euler numbers, by Junya Satoh.
There are also many articles by Taekyun Kim on related functions.
One key point is that the value of the function $\zeta_q$ at negative integers is a fraction which has no limit when $q$ goes to $1$ . One can obtain a relation to the $q$ -Bernoulli numbers introduced by Carlitz in 1948, by taking a difference with the value of a modified $\zeta_q$ function.
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
Let $A$ be an algebra (or dg algebra). Where can I find a proof of $HH_*(A) = HH_*(\mathrm{Mod}_A)$ and $HH^*(A) = HH^*(\mathrm{Mod}_A)$? (And does this hold for any $A$?) Here $\mathrm{Mod}_A$ is, e.g., the category of left $A$-modules.
One reason why this is interesting/important/useful is because many categories which arise "in nature" are of the form $\mathrm{Mod}_A$. For example, there is a theorem of Bondal and van den Bergh which states that derived categories of a large class of varieties (I forget their exact hypotheses) are equivalent to $\mathrm{Mod}_A$ for some $A$. Dyckerhoff also proved that categories of matrix factorizations are of this form. By mirror symmetry, Fukaya-type categories should be of this form as well...
Anyway, so to compute $HH$ of such a category, it suffices to find this $A$ and then compute $HH(A)$. I think that it generally(?) should be easier to compute $HH$ of an algebra than $HH$ of a category. (Of course finding this $A$ can be a very nontrivial task.)
|
Basically this follows from the fact that the derived category of bimodules over two algebras is equivalent to the (suitably defined) functor category between the derived category of modules of each algebra. Say, Toen's paper on derived Morita equivalence. Then, the identity functor is given by the algebra itself interpreted as a bimodule, so the Hochschild cohomology is $\mathrm{Ext}^i_{A-A}(A,A)$. You can compute this using the bar resolution and a quick calculation gives you the usual definition of Hochschild cohomology.
|
I guess it follows from results in [Lowen, Wendy; Van den Bergh, Michel. Hochschild cohomology of abelian categories and ringed spaces. Adv. Math. 198 (2005), no. 1, 172--221. MR2183254 (2007d:18017)]
For algebras $A$, at least, it follows more simply from the fact that the categories $\mathrm{Mod}(A)$ and $A$ are Morita equivalent. That must have been proved by Mitchell or Freyd...
|
HuggingFaceH4/pmp-stack-exchangedata/mathoverflow.net
|
As roadway consists of a width of road on which a vehicle is not restricted by any physical barriers or separation to move laterally source , what is a similar term which encompasses the width of the entire road's infrastructure - barriers, drainage ditches, sound screens, generally the entirety of terrain allocated for construction as a road is built?
|
A number of online sources use the term road reserve to encompass the road, associated infrastructure, such as drainage and spare land for additional lanes in the future. The width of this would be the road reserve width.
|
In the US this is commonly referred to as the Right-of-Way (property line to property line on either side of the roadway). Or, if the roadway right-of-way is wider than needed for construction, it may be referred to as the limits of construction.
|
HuggingFaceH4/pmp-stack-exchangedata/engineering.stackexchange.com
|
How does one prove that $\ln(Z(J))$ generates only connected Feynman diagrams? I can't find a proof of this statement, and have only seen it demonstrated for the 2- and 4-point cases.
|
Assume that the generating functional is given by a sum of all possible diagrams, i.e.
$$Z(J)=\sum_{n_i} D_{n_i}.$$
Furthermore, assume that each diagram D is given by a product of connected diagrams $C_i$, i.e. a diagram D can be disconnected. We will write this as
$$D_{n_i}=\prod_i\frac{1}{n_i!}C_i^{n_i},$$
where dividing by $n_i!$ accounts for the symmetry factor coming from exchanges of propagators and vertices between different copies of the same connected diagram. Combining this with our first expression, we get
$$Z(J)=\sum_{n_i}\prod_i\frac{1}{n_i!}C_i^{n_i}.$$
Since the sums over the $n_i$ run independently, the sum of products factorizes into a product of exponential series:
$$Z(J)=\prod_i\sum_{n_i=0}^\infty\frac{1}{n_i!}C_i^{n_i}=\prod_i e^{C_i}=\exp\left(\sum_i C_i\right).$$
Taking the logarithm on both sides gives you the desired expression.
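A minimal symbolic check of the combinatorics (assuming SymPy, with two connected pieces $C_1, C_2$ and a bookkeeping parameter $t$): expanding the exponential order by order reproduces exactly the $1/n_i!$ symmetry factors of the disconnected products.

import sympy as sp

t, C1, C2 = sp.symbols('t C1 C2')
series = sp.exp(t * (C1 + C2)).series(t, 0, 3).removeO().expand()
print(series)   # 1 + t*(C1 + C2) + t**2*(C1**2/2 + C1*C2 + C2**2/2), up to term ordering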
|
An intuitive interpretation from Timo Weigand's lecture notes :
Suppose $iW[J]$ contains all connected diagrams; then all possible connected and disconnected diagrams can be written as products of $iW[J]$:
$$ \frac{Z[J]}{Z[0]} = 1 + iW[J] + \frac{1}{2!} {(iW[J])}^2 + \frac{1}{3!} {(iW[J])}^3 + ... = e^{iW[J]} $$
So
$$ iW[J] = \ln \frac{Z[J]}{Z[0]} $$
This interpretation is just the same as Frederic's answer, but expressed in reverse order.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|
How does one find the distances of celestial objects in the night sky, such as the Moon and the stars, from the Earth using a snapshot of information (including, say, the intensity and wavelength of light received from the various observed objects and their relative positions) observed in the night sky at a single time instant? Most methods (especially those taught in orbital mechanics classes) are inspired by Gauss's method for determining orbits (and hence, distances of the observed objects from the earth), thus requiring observations at several time instants, or equivalently, position and velocity information at a single time instant.
|
For very distant objects, their distance from us can be estimated in one snapshot by measuring the redshift in their spectra, knowing the so-called Hubble Constant . This method can be refined somewhat if the type of the object (star, quasar, galaxy, etc.) is known and its spectrum can be accurately gathered.
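As a minimal numeric sketch of the redshift method (assuming the low-redshift Hubble law $d \approx cz/H_0$; the $H_0$ value is illustrative):

c_km_s, H0 = 299792.458, 70.0   # speed of light (km/s); Hubble constant (km/s/Mpc), illustrative
z = 0.023                        # a measured redshift, small enough for d ~ c z / H0
print(c_km_s * z / H0)           # ~ 98.5 Mpc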
For a much closer object whose diameter is known, its distance can be estimated trigonometrically by measuring its angular size with a telescope, in one "snapshot".
If two cameras are allowed instead of one, then two photos of the same object in the sky shot at the same instant from different locations on earth will yield the distance via a parallax measurement , for objects within our local spot in space.
|
For near planets, we can use radar ranging. Beyond that, we use trigonometric parallax. Hipparcos measured parallaxes out to a few hundred light years. Gaia is measuring parallaxes to about 25 thousand light years.
Very long base line interferometry can detect distances of radio sources using an array of radio telescopes, by measuring the time difference for a signal to arrive at different telescopes. This goes out to about 13 thousand light years, I think.
Beyond that we measure luminosity distances. If we know, from studies on near stars, that a particular type of star has a precise absolute luminosity, then we can estimate distance from its apparent luminosity.
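The luminosity-distance step is just the distance modulus $m - M = 5\log_{10}(d/10\,\mathrm{pc})$ solved for $d$; a minimal sketch with illustrative numbers:

def distance_pc(apparent_mag, absolute_mag):
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# e.g. a star of absolute magnitude -4 observed at apparent magnitude 11
print(distance_pc(11.0, -4.0))   # 10000 pc = 10 kpc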
Further out we use cosmological redshift.
This is the bones of the cosmological distance ladder. More detail is in Structures of the Sky , and in many other sources.
|
HuggingFaceH4/pmp-stack-exchangedata/physics.stackexchange.com
|