Talk:Polynomial ring/Archive 1

Use of N perhaps confusing

The variable N is used in at least two ways, allowing for some confusion. In the first sense, N is the set from which exponents of variables are drawn (and this is used right next to n, which is the number of variables, rather than, say, using k). Later, N is used for k[x]/(x^2).

I believe something along these lines made Oleg Alexandrov and me interpret the article differently, as he reverted what was probably a valid correction (though for my money, the sentence remained too confusing to be "correct"). At any rate, to explain my insertion of the material in a new way: the set N is the set from which i can appear in the expression (and k is a positive integer from 1 to n). The reversion seemed to indicate that you were considering n instead.

The only reason to need 0 in N is that the definition of multiplication requires that i+j work out nicely, and so one needs to have an x^0 to act as 1. For instance y=x^0*y^1 needs 0.

The section on the "alternate definition" itself is a bit confusing. Would it be better instead to take this opportunity to define more general exponents? Ring theorists love taking exponents to be subsets of rational numbers and the like, and as long as the exponents are taken from a commutative monoid N (like the nonnegative integers), things work about as expected, and it is just the free monoid ring on N. If N is a group, it is a group ring. If N is the p-adic integers, then you get rings associated to endomorphisms of p-groups.
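As a rough sketch of the construction alluded to above (my notation, not necessarily the article's): for a ring R and a commutative monoid N, the monoid ring R[N] consists of finitely supported families of coefficients indexed by N, written as formal sums, with

    \sum_{n \in N} a_n X^n + \sum_{n \in N} b_n X^n = \sum_{n \in N} (a_n + b_n) X^n,
    \left(\sum_{n \in N} a_n X^n\right) \cdot \left(\sum_{n \in N} b_n X^n\right) = \sum_{n \in N} \Big(\sum_{i+j=n} a_i b_j\Big) X^n.

Taking N to be the nonnegative integers under addition recovers R[X]; taking N = Z gives Laurent polynomials; taking N to be a group gives the group ring.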

Basically, the current section seems like easy mathematics made hard, so it might as well be used as a gateway to moderately complex mathematics (still made hard, hehe). JackSchmidt (talk) 04:53, 18 February 2008 (UTC)

Unicodify

I would like to convert a number of the inline math tags to simple text. I say "unicodify" when in fact the characters are just ASCII. Images do not resize automatically, and make the articles hard for visually impaired readers to understand. I would leave all sections with \sum or \rightarrow as those do not translate well to plain text. I believe that would mean there would be 5 images on the page, and the rest would be resizable text. JackSchmidt (talk) 00:20, 24 February 2008 (UTC)

multiplication sign

Oleg reverted my edit.

The section says: "the set of all polynomials with coefficients in the ring R, together with the addition + and the multiplication · mentioned above, forms itself a ring, the polynomial ring over R, which is denoted by R[X]". So the article itself requires the dot for multiplication.

Secondly the formula for addition does not tell how to add polynomials of different degrees.

Thirdly, the use of both Latin and Greek letters for indices is unneeded.

Oleg, I repeat, discuss FIRST and revert after the discussion if that is the result. You are not the God of WP.

Bo Jacoby (talk) 10:56, 21 February 2008 (UTC).

I am sorry that I had to do a wholesale revert. And I am surely not the god of WP, and I hope other people will comment here too. However, Bo, your efforts to place dots where they don't belong have been discussed at length at talk:integral and talk:derivative, and you refuse to listen. So let me try one more time: please use established notation. A polynomial is denoted rather than , although people use as that can't be written otherwise. Please find established references to the contrary. Otherwise, your edits are in violation of Wikipedia policies of following established practices. Thanks. Oleg Alexandrov (talk) 15:46, 21 February 2008 (UTC)
The points made here are mostly valid, but the implementation was suboptimal. I made some minor changes which I think addressed the points while following standard conventions of wikipedia and mathematics. In particular, I only used the cdot to denote the multiplication of polynomials, and this is often used for clarity, though it could also be omitted. I think it is better to use it here, simply because one is defining cdot, and it is easier to define it if it is explicitly involved in the definition. The use of Greek letters struck me as odd, and the standard indices i,j were still open. The problem with degree was handled strangely in Bo's edit (simply removing the bounds on the sums). I just stuck a sentence after the definition to explicitly say that the article does not disallow zero summands. I mean x^2+1 has a_1 = 0, and it can be quite convenient to allow 0x^3+x^2+1 as well. I also removed the "one can check that", which was not terrible, but still was the sort of "order the reader around" language that is discouraged by WP:MSM. JackSchmidt (talk) 16:02, 21 February 2008 (UTC)
Jack, thank you for taking the time to study carefully the issue and for your edit. I do agree that the multiplication sign needs to be used the first time, when the multiplication is defined. Oleg Alexandrov (talk) 03:52, 22 February 2008 (UTC)

Thank you gentlemen. The edits made improved the article.

  1. The addition formula would in my opinion be clarified by inserting (formally redundant) parentheses in order to stress the analogy to the multiplication formula (see the sketch after this comment).
  2. The opposition against the explicit multiplication sign in talk:integral was because some editors did not consider the operation between f(x) and dx in the integral to be a multiplication (!). In the case here there is no argument against that is supposed to mean , so I still find it correct to include the multiplication sign, especially when the article explicitly states that the multiplication of the ring is called "·". But using juxtaposition for multiplication in R and "·" for multiplication in R[X] is perhaps an acceptable compromise.
  3. In the addition formula the summation indices of all three polynomials involved are all called i. There is no reason why the indices in the multiplication formula are called j and k.
  4. The expression \sum_{i+j=k} a_i b_j is unnecessarily confusing. Use \sum_i a_i b_{k-i}
  5. The problem of different degrees is solved simply by remarking that the series has but a finite number of nonzero terms and so it is actually a finite sum.

Bo Jacoby (talk) 12:27, 22 February 2008 (UTC).
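For concreteness, here is roughly what item 1 above asks for, keeping the bounds and indices the article currently uses (a sketch, not a quotation of the article):

    \left(\sum_{i=0}^n a_i X^i\right) + \left(\sum_{i=0}^n b_i X^i\right) = \sum_{i=0}^n (a_i + b_i) X^i,
    \left(\sum_{i=0}^n a_i X^i\right) \cdot \left(\sum_{j=0}^m b_j X^j\right) = \sum_{k=0}^{m+n} \Big(\sum_{i+j=k} a_i b_j\Big) X^k,

so that the two definitions are displayed in visibly parallel form.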

I agree with 1, I don't agree with 2, 3, 4, the current notation looks better to me. It would not make sense at all to use your proposal 5, formal series are a much more complex thing than polynomials, I'd rather not invoke it. Oleg Alexandrov (talk) 15:24, 22 February 2008 (UTC)
1 sounds good. It will increase the symmetry and make it clearer which notion of addition is being defined. The summation signs themselves are basically just a formal way of writing down the sequence, but we want to avoid forcing the reader to realize that too early.
2 is hard for me to understand. I think the current amount of cdots is good if not optimal. Do you feel strongly about the use of cdot? I think objectively it cannot be that important (on some screens the dot will not even render, on year old printouts the dot will already have rubbed off), but the prolonged conflict about it cannot be good for the encyclopedia. I think the current level is a good compromise, and in fact is superior to both the previous version with none and the previous version with many, so I hope we can agree the current is at least "good enough to agree on".
3 could go either way here, but let me explain why I think the current way is better. I will assume you meant "no reason to use i and j on the left hand side", since the right hand side is a double sum so needs at least two indices. There is a reason to use a_i and b_j on the left side, but it is merely expository. In the addition formula's right hand side, the coefficients are a_i and b_i, so we do the same on the left hand side. Similarly, the coefficients appearing on the right hand side of the multiplication definition are a_i and b_j, so we do the same on the left, even though we could have used a_i and b_i on the left. Note this symmetry is part of my opinion on 4.
4 could also go either way in this article, but let me explain why I think the current way is better. In fact, I would have agreed with you a year ago, but I've seen lots of summations used by my combinatorialist friends. Using extra indices and describing the geometric set from which the indices are chosen under the summation sign is much more readable than what is in effect parameterising the geometric set with an arbitrary coordinate system. Now {(i,j):i+j=k} is just a line, so there is not a huge problem here (hence in this article it does not matter as much), but even {(i,j,m):i+j+m=k} begins to get harder to read. Once you have some slopes other than 1, and 4 or 5 dimensions, the single index convention no longer works at all (for people with bad eyesight, for people who receive photocopies of the article, etc.). Basically it is a question of typography, and it is easier to read the conditions on i,j under the summation sign than it is to read them smooshed together in the a_i b_{k-i}. A different reason to disagree with 4 is related to my "N" comment above. If we do want to do monoids for N, then we cannot subtract, only restrict to indices that add up correctly. However, no one has said it was a good idea, so perhaps it is irrelevant to this article. Convolution is more naturally phrased in terms of subtraction anyways, especially when it is integration with respect to Haar measure on a group.
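To make the readability point concrete with an example of my own choosing: the coefficient of X^k in a product of three polynomials can be written either with the index set spelled out under the sum or with nested subtractions,

    \sum_{i+j+m=k} a_i b_j c_m \qquad \text{versus} \qquad \sum_{i=0}^{k} \sum_{j=0}^{k-i} a_i\, b_j\, c_{k-i-j},

and the first form scales to more factors and more general index sets.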
5 makes the presentation seem too abstract, I think. I would say the problem of different degrees is already solved. Completely removing the bounds on the sums would make them harder for younger students to read. Your particular previous implementation of this required defining a_i = b_i = 0 for i<0, so there will need to be an extra sentence anyways. However, 5 might make a good contribution to the formal definition below. You could explain how "\sum_i a_i X^i" is a convenient expression for the sequence a:i \mapsto a_i, and how interpreting the sum formally and applying the distributive law formally produces the Cauchy/convolution definition of multiplication, etc. In the early part though I think it will be too barren.
In summary then, 1 seems like a good idea, 2 is hopefully addressed, and 3,4,5 potentially degrade the exposition. JackSchmidt (talk) 17:43, 22 February 2008 (UTC)

Gentlemen:

  1. Everyone agrees.
  2. Agreed. No, I do not feel strongly about the use of the dot except where the missing dot leads to misunderstanding or confusion. "f(x)" may be a function value or a product. When it is a product the dot helps: "f·(x)". One example of omitting the dot is in the integration, and I was amused to learn that other editors do not consider "f(x)dx" to be a formal product. So the omission of the dot has really confused people. The trained mathematician is used to omitting the dot, and math books are read and written by trained mathematicians, but WP readers, not being trained mathematicians, are confused by the editor first explaining that the multiplication sign is "·", and afterwards not using "·" for multiplication. Omitting the dot is polite to the writer and rude to the reader. I prefer clarity to sloppy conventions in an encyclopedia. Also, programming languages require explicit multiplication signs, so we've got to get used to it anyway.
  3. I merely suggest that i is consistently the exponent of X, like in: \left(\sum_{i=0}^n a_iX^i\right) \cdot \left(\sum_{i=0}^m b_iX^i\right) =\sum_{i=0}^{m+n}\left(\sum_{j+k = i}a_j b_k\right)X^i
  4. The expression in the product formula does strangely not tell if the indices go from 0 through k or from minus infinity to infinity. Luckily it makes no difference, but that doesn't go without saying.
  5. The simpler version is: \sum_i a_iX^i \cdot \sum_j b_jX^j = \sum_k \left(\sum_i a_i b_{k-i}\right)X^k, noting that the coefficients a_i and b_j are nonzero only for a finite number of nonnegative integer values of the index. I prefer explicit multiplication signs, but I understand that you guys don't. That's OK.

The explanation: "One can think of the ring R[X] as arising from R by adding one new element X to R and only requiring that X commute with all elements of R. In order for R[X] to form a ring, all sums of powers of X have to be included as well" should be moved upwards to become the second sentence. It is more understandable than the formulas. And the ring should be written in one way rather than in the three ways currently used, the last of which is plain R[X].

Bo Jacoby (talk) 19:35, 23 February 2008 (UTC).

  1. Done.
  2. All agreed, no action needed here?
  3. I am almost convinced. It is either make a_i b_j on both sides, or make X^i all three times; both seem reasonable. Either the current, or your suggested version \left(\sum_{i=0}^n a_iX^i\right) \cdot \left(\sum_{i=0}^m b_iX^i\right) =\sum_{i=0}^{m+n}\left(\sum_{j+k = i}a_j b_k\right)X^i.
  4. I agree that the current notation is deficient, and basically in the same way as your long-ago suggested notation. I still prefer the current version, but I am open to suggestions.
  5. I still think this would be better used to tone down the abstractness of the section "Formal definition". I'll write the monoid section, since it will give us a place to expand the article, rather than worrying so long about indices.
  6. (the location of the sentence) I like the explanatory sentence as the second sentence (Bo's version). (Done Bo Jacoby (talk) 15:24, 25 February 2008 (UTC))
  7. I definitely agree, a single notation should be used for just R[X] in running text. I'll switch it to R[X] when it is in running text (small images slow down the page load, and make it jittery for everyone, and I cannot actually read the latex images on wikipedia, too teeny). JackSchmidt (talk) 00:05, 24 February 2008 (UTC)

Thanks. Remaining issues: 3, 4, 5. The subsection on generalized exponents says: "the formulas for addition and multiplication are the familiar: and where the latter sum is taken over all i, j in N that sum to n". I like that there are no explicit limits on the summations, but it should be explained that the apparently infinite sums are actually finite sums because only a finite number of terms are nonzero. Bo Jacoby (talk) 15:24, 25 February 2008 (UTC).

noncommuting variables

The example YX−XY = 1 is important in quantum mechanics. X is considered multiplication by an independent variable x: X = (f → (x→ x·f(x))), and Y is considered differentiation with respect to this variable x: Y = (f → df/dx). Then the Leibniz product rule gives (YX)f = d(x·f)/dx = x·(df/dx)+(dx/dx)·f = (XY+1)f implying that YX−XY = 1.

The exponential function t → e^{at} is an eigenfunction of the differential operator d/dt with eigenvalue a because d(e^{at})/dt = a·e^{at}. The frequency ν = a/2πi is an eigenvalue of the operator (1/2πi)d/dt because (1/2πi)d(e^{2πiνt})/dt = ν·e^{2πiνt}. The energy E = hν (where h is the Planck constant) is an eigenvalue of the operator (h/2πi)d/dt because (h/2πi)d(e^{2πiEt/h})/dt = E·e^{2πiEt/h}. So in quantum mechanics the energy E and the time t are related by the commutation relationship Et−tE = h/2πi.

This should be explained somewhere in WP. Perhaps there should be a link from here to commutator and to canonical commutation relation.

Bo Jacoby (talk) 05:25, 25 February 2008 (UTC).

I'm not sure why that was not included already. I had actually written out the little derivation of the relation, explaining how it acts on polynomials, etc. I decided it was better just to cite a reference since the calculations are not hard, but it helps to see them written down once, and then to write them down yourself. However, wikipedia is not a textbook and this is not an article on Weyl algebras, so I figured it was better to stick with the cite. Somehow the final result was deleted too. At any rate, I added it in. JackSchmidt (talk) 16:26, 25 February 2008 (UTC)

The polynomial ring R[X].

Quote 1: "only requiring that X commute with all elements of R"

Question 1: Is it really required that X commutes with all elements of the ring in order that the polynomial ring is defined? I think not. Consider (Z[X])[Y]. Do you require that YX=XY ?

Quote 2: "If R is commutative, then R[X] is an algebra over R".

Question 2: Is this information appropriate here? I am more confused than enlightened.

Bo Jacoby (talk) 10:50, 26 February 2008 (UTC).

  1. Yes, it is required. In (Z[X])[Y] it is required that YX=XY.
  2. Yes, it is one of the main ways of defining polynomial rings over commutative rings. R[{x: x in X}] is the free unital, commutative, associative R-algebra with R-algebra basis of cardinality |X|, and is unique up to R-algebra isomorphism; the construction is natural in R and X, etc. In plain English, R[X] is the most general ring in which it makes sense to evaluate its elements in R-algebras (a sketch of this universal property follows this comment).
JackSchmidt (talk) 14:51, 26 February 2008 (UTC)
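A sketch of that universal property in symbols (standard fact, my phrasing): if R is commutative, then for every commutative R-algebra S and every element s of S there is a unique R-algebra homomorphism

    \varphi_s : R[X] \to S, \qquad \varphi_s\Big(\sum_i a_i X^i\Big) = \sum_i a_i s^i,

that is, the unique one with \varphi_s(X) = s. Evaluation of a polynomial at a point of R is the special case S = R.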
Thank you. Normally the adjective restricts the meaning of the noun: every black dog is a dog, and so on. So a reader might get confused to learn that a 'non-commutative polynomial ring' is not a 'polynomial ring'. He might prefer the term 'commutative polynomial ring' to avoid misunderstandings. Bo Jacoby (talk) 10:30, 27 February 2008 (UTC).

Sum and product formula revert

Oleg wrote: (rv the new sum and product formula. Bringing in infinite sums and formal series is just poor judgement, why invoke something complex to explain something simple? No comment on other changes)

replacing

where only a finite number of terms are nonzero in these formally infinite sums.

with

Answer: The limits n and m are undefined and unexplained. For example

In the formula with limits, the left hand side is

and the right hand side is

So the formula with limits is incorrect. The left hand side does not involve but the right hand side does.

In the formula without limits the left hand side is

and the right hand side is

The formula without limits is correct.

Note also that the expression

which is used in both formulas, is also formally an infinite series, unlike

which, however, assumes that is defined for negative indexes.

The sum of two terms can be written a+b or a+0+b+0+0. Any number of zeroes can be included or excluded from a sum without changing the value. An infinite sum with only a finite number of nonzero terms is a handy expression for a finite sum with an unknown number of terms. The fact that the theory of sums of an infinite number of nonzero terms - a series - is 'something complex' does not imply that the theory of sums of a finite number of nonzero terms is complex at all.
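One way to see why including or dropping zero terms is harmless is to record only the finitely many nonzero coefficients and let every absent coefficient default to zero. A small illustrative sketch (Python; the representation and names are chosen here just for the example):

    # A polynomial is a dict {exponent: nonzero coefficient}; absent exponents mean 0,
    # so adding or removing explicit zero terms cannot change the polynomial.

    def add(p, q):
        """Coefficient-wise sum; no degree bounds are needed."""
        s = {i: p.get(i, 0) + q.get(i, 0) for i in set(p) | set(q)}
        return {i: c for i, c in s.items() if c != 0}

    def mul(p, q):
        """Convolution product: the coefficient of X^k is the sum of a_i*b_j over i+j=k."""
        r = {}
        for i, a in p.items():
            for j, b in q.items():
                r[i + j] = r.get(i + j, 0) + a * b
        return {k: c for k, c in r.items() if c != 0}

    p = {0: 1, 1: 1}            # 1 + X
    q = {0: 2, 1: 3}            # 2 + 3X
    print(mul(p, q))            # {0: 2, 1: 5, 2: 3}, i.e. 2 + 5X + 3X^2
    print(add(p, {1: -1}))      # {0: 1}: the zero X-term simply disappears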

Summarizing: The formulas without limits are simpler. The limits complicate matters and actually made the formulas incorrect. It is not sufficient that a formula is found in books (and taught in American universities for years by Oleg), it must also be correct in order to qualify for WP.

Bo Jacoby (talk) 14:01, 26 February 2008 (UTC).

"It is not sufficient that a formula is found in books, ... it must also be correct in order to qualify for WP" directly violates wikipedia policies. WP:V is "an official English Wikipedia policy", and it states "The threshold for inclusion in Wikipedia is verifiability, not truth." (emphasis theirs).
It is not clear to me that some of your recent edits are being constructive, and they are going against the consensus process on this page. We are having a discussion here to work out what would be best on the page.
I am happy to work towards consensus, but please work out the experiments on the talk page, not on the article, as there have been entirely too many reverts on the article page. Please see WP:REVERT for guidelines on reverting.
Note that Oleg and I have both explicitly said the notation without limits is a poor choice for the section you have added it to. We have said this more than once, and you have added it more than once, against consensus. However, I have not been trying to "stalemate" the discussion, but have suggested your sums without limits might be more appropriate in the "Formal definition" section. While Oleg has not explicitly agreed, his argument that the notation is too technical would need to be revised since that section is already quite technical. At any rate, it would be a constructive edit to restore simple formulas to the simple section, and try your limitless expressions in the formal definition. JackSchmidt (talk) 14:51, 26 February 2008 (UTC)
After a very thorough explanation I undid Oleg's illegitimate revert, which was made without explanation on the talk page. The "verifiability, not truth" does not mean that an incorrect formula should be included, but that nonverifiable formulas should be excluded. If you dislike the limitless notation then you should avoid it in all cases. By now, one of the four sums in the multiplication formula is without limits and that makes the formula incorrect. I too am happy to work towards consensus, and I look forward to seeing your suggested formula having limits and being correct. Bo Jacoby (talk) 15:14, 26 February 2008 (UTC).
I raised the matter at Wikipedia talk:WikiProject Mathematics. Oleg Alexandrov (talk) 15:55, 26 February 2008 (UTC)

I prefer the limit-free version, since it more easily generalizes to the case of a free associative algebra on a set, where it may be inconvenient to be too strict in specifying the allowed index sets. It is much easier to say that only finitely many terms are non-zero than to try (needlessly) to pin down which terms are nonzero, particularly in the general case. I should also note that both versions are common, but the limit-free version is the one advanced by van der Waerden as well as by Bourbaki. Silly rabbit (talk) 16:08, 26 February 2008 (UTC)

However, we should focus on the needs of the reader who is trying to learn these things, not on the needs of the expert. Also note that the index-free formula is already in the generalizations section. Oleg Alexandrov (talk) 18:33, 26 February 2008 (UTC)
I would like to add that I find the current edit unacceptably pedantic, and would prefer to see the indexed version restored. I do not agree with Arthur Rubin's assertion that the sum of an infinite number of zero terms is potentially problematic, although as an analyst myself I can certainly appreciate the queasiness he feels in having an unqualified summation. Silly rabbit (talk) 18:56, 26 February 2008 (UTC)
I'd prefer having the indexed form, myself. I just think that, if this is the definition of addition and multiplication, it needs to be formally correct. — Arthur Rubin | (talk) 22:11, 26 February 2008 (UTC)

I'm going to go ahead and revert to the version by Oleg. Of the three possibilities so far, I find it to be the least controversial. I'd like to continue the discussion about how polynomials ought to be defined, but with the lesser of three evils in the article. Silly rabbit (talk) 22:25, 26 February 2008 (UTC)

Well, I must say that the formulas
and
are very intimidating to a new reader. It is much simpler to put the original simple formulas with a finite sum and explain a bit how the indices are handled than having this in. Polynomial multiplication is a simple thing, just using the distributive law, why make things complicated? Oleg Alexandrov (talk) 04:33, 27 February 2008 (UTC)
We agree that simple things should not be made complicated. The formulas are simpler without explicit limits. Let's remove the scary words 'formally infinite' and write:
In these sums only a finite number of terms are nonzero.
Bo Jacoby (talk) 10:47, 27 February 2008 (UTC).
I prefer the notation using the limits. The problem pointed out by Bo Jacoby, that certain undefined coefficients show up in the product formula, is a minor one and rather formal. It can and should be resolved by saying in words below the formula that coefficients a_{-1} etc. are interpreted to be zero. Using elements of logical notation under the summation is IMO obfuscating the story. Striving for simplicity is a good thing, but I think introducing infinity here and then restricting back to finitely many coefficients does not help a beginner. Jakob.scholbach (talk) 12:28, 27 February 2008 (UTC)

field and ring

Arcfrk made improvements. Thank you. The ring K[X] is defined now when K is a field, but R[X] is used when R is a ring. Bo Jacoby (talk) 11:51, 3 March 2008 (UTC).

I noticed that too. By the way, I don't see much value in starting with a field, rather than a ring, as it was before. It is better in my view to use the field assumption only when actually stating specific properties for which a field is needed. Oleg Alexandrov (talk) 03:57, 4 March 2008 (UTC)
I do, and for both historical and pedagogical reasons. By the way, I've just checked Lang's Algebra (3rd ed.), Chapter IV, where "Basic properties for polynomials in one variable" starts with the ring of polynomials A[X] for an arbitrary commutative ring A, and it turned out that every single statement save the very first one, from Theorem 1.2 to Proposition 1.12, actually assumes that the ring of coefficients is a field! (Exception? You'll be amused: Theorem 1.1 (preceded by "We start with the Euclidean algorithm") deals with polynomial division f = gq + r in the special case when the leading coefficient of the divisor g is a unit in A. Needless to say, the Euclidean algorithm does not follow, and so Theorem 1.1 is not mentioned again in this section.) Of course, it will warm my heart as an algebraist to define an algebraic variety as a separated scheme of finite type over the spectrum of a strictly henselian ring, but would that be a wise course for starting a wikipedia article "Algebraic variety"? Arcfrk (talk) 05:16, 4 March 2008 (UTC)
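For reference, the statement being alluded to (division with remainder when the divisor's leading coefficient is a unit) can be sketched as follows; this is the standard fact, in my own phrasing: if A is a commutative ring and f, g are in A[X] with g nonzero and with invertible leading coefficient, then there exist unique q, r in A[X] with

    f = g q + r, \qquad r = 0 \ \text{or} \ \deg r < \deg g.

Over a field K every nonzero g qualifies, which is why K[X] is a Euclidean domain with the degree as Euclidean function.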
Please note though, you define the polynomial ring only over fields, but then later the text deals with polynomials over a ring, without that having been defined. Pedagogically, that is not very good. Oleg Alexandrov (talk) 05:31, 4 March 2008 (UTC)
The article is undergoing a transition, but not everything at once, please! Would it be more pedagogically sound if I just commented out the leftovers from the old version that have not yet been incorporated? Arcfrk (talk) 05:47, 4 March 2008 (UTC)

Some more picky points (or, about writing p(X) and about non-commutative rings)

I was just sucked into reading this article. It actually looks pretty good, but I have some questions.

  • The definition says a polynomial is an expression of the form . Being pedantic, one could object that it's not the equation but its right hand side that is a polynomial (so one should write only that RHS and say that such an expression is usually denoted as p(X) or some such). But I was actually more interested in the left hand side. So if p(X) is a polynomial, then what is p? Possible answers: (1) you are not allowed to write p without parentheses following it, so the question makes no sense; (2) it is a macro that gobbles a symbol as argument and produces the RHS with the argument inserted in place of X (a slight precision of the previous answer; I think many people unconsciously think of it like this, but it is not very mathematical); (3) it denotes the polynomial function associated to the polynomial (really? then what is its domain, and why is X allowed?); (4) p is the same as p(X), since the stuff in parentheses is to be substituted for X in the polynomial p, and substituting X for X changes nothing (don't laugh, some authors seriously maintain this point of view); (5) p is the same as p(X), but the X is a warning to the reader that there are Xes hiding in the expression; (6) (your answer here). In any case people do write p(X), and I'm not objecting to using it at this point in the discussion. But also very few people really keep up the effort of writing all polynomials as p(X), and the current state of this article proves the point (I just replaced a p by p(X) in the line immediately following the definition, but in "Properties" the (X)es are mostly dropped). Except that in "Further properties" we find P(x) twice, standing for the polynomial function associated to P evaluated at x. Which brings me to the next two points.
  • When P(x) is twice used in "Further properties" to denote the value after substitution of x for X (first as a function of x with P fixed, then as a function of P with x fixed), wouldn't it be good to define the process of substitution explicitly rather than implicitly? And mention that this requires the base ring to be commutative? Also I think it would be nice to add that the base ring R is a fairly important example of a unital associative R-algebra.
  • In fact nothing in the article explicitly takes into account that the base ring could be non-commutative; shouldn't it be stated in the introduction that this article is not about polynomials over non-commutative rings (a slightly slippery subject for those used to the commutative case). I'm not saying nothing could apply to the non-commutative case, but it is clearly not what is being thought of. Many parts implicitly exclude the non-commutative case by requiring something stronger for the base ring (field, domain,…). It is true, Noetherian rings do not have to be commutative, and properly formulated the Hilbert basis theorem applies in the non-commutative setting, but is that really so important here? Note that even the part about non-commutative polynomials assumes the base ring is commutative, and that its elements commute with the indeterminates.
  • In the introductory sentence, shouldn't it be somewhere said that the set of polynomials itself is (made into) a ring, if only to justify the name "polynomial ring"?

Marc van Leeuwen (talk) 15:27, 3 March 2008 (UTC)

(Noncomm rings) I agree that it may be a good idea to begin only with commutative rings. In fact, I think it would be wise to discuss Q[x], R[x], k[x], Z[x], C[x,y], or so, before letting the coefficient ring be general. I think many of the nice categorical and module-theoretic properties of R -> R[x] fail when R is not commutative, so it might be wise to restrict to R commutative for the majority of the article.
To clarify though, I believe in the generalization section, all requirements on R are stated explicitly and locally, but I could double check if you think there is a problem. In Lam's cited text, the chapter on division rings shows how one can reasonably extend the idea of "substitution" to certain non-commutative rings, and still have interesting mathematics. In general R[x] may be nearly unrelated to R or to equations over R when R is a non-commutative ring.
In response specifically to "Note that even the part about non-commutative polynomials assumes the base ring is commutative, and that its elements commute with the indeterminates." This is phrased in a misleading way. The section defines non-commutative polynomials quite generally, and then remarks that they are free algebras when R is commutative. Note that even R is not an R-algebra when R is not commutative, so the assumption is pretty natural. JackSchmidt (talk) 16:19, 3 March 2008 (UTC)
OK, thanks. See my edit that tries to clarify this (I hope you agree). By the way I think even the very notion of an R-algebra supposes that R is commutative (at least that is what its article says). Marc van Leeuwen (talk) 05:59, 4 March 2008 (UTC)
The new edits are nice. I shortened the parenthetical remark (if something is worth saying, it is worth saying without parentheses). Wait, that's not fair.
Yes, the ring R need not be commutative, but an R-algebra A is not only a module over R, but also over R/[R,R], the largest commutative quotient of R, so any non-commutative aspect of R is lost immediately. For instance an algebra over a Weyl k-algebra is 0, so one tends to lose generality rather than gain it by allowing R to be non-commutative. JackSchmidt (talk) 14:48, 4 March 2008 (UTC)
Further, in the section "skew polynomial rings", two important ring constructions are given where the indeterminates do not commute with the coefficients, but rather act as derivations or ring endomorphisms. In each case, it is important that some sort of "PBW" basis exist to give some hope that the polynomial rings are noetherian, but the method is not so trivial as "indeterminates commute with coefficients". JackSchmidt (talk) 16:19, 3 March 2008 (UTC)
In spite of my edit mentioned a few lines up, I think it would be best if this article were dedicated to commutative polynomial rings (which I guess is what most people think of naturally), stating so clearly, with a separate article about noncommutative issues (both for coefficients and variables), making clear what can be retained and what changes with respect to the commutative situation. This is probably more clear than to have the reader having to search for the precise assumptions at each point. Marc van Leeuwen (talk) 05:59, 4 March 2008 (UTC)
I disagree that non-commutative concerns should be removed. I agree that commutative rings should be emphasized earlier. Please see WP:NPOV for the general wikipedia policy on inclusion of multiple points of view.
The reader need only search to the section heading, "non-commutative polynomial rings", to be aware of the new scope. All the assumptions are stated within the paragraph they are used. If we cannot expect the reader to read the whole sentence, then we cannot expect them to understand the article. The generalizations section includes both non-finitely-generated k[x] algebras and non-commutative ring extensions of k[x]. I think the entire research community in non-commutative rings thinks of polynomial rings as being non-commutative, so I do not think non-commutative concerns are given undue weight (five paragraphs, one of which serves double duty to the commutative algebraists as well, the other four describing three distinct and physically important ring constructions). I do think the earlier sections need some expansion, but I think that is being taken care of by Arcfrk and Marc van Leeuwen. JackSchmidt (talk) 14:48, 4 March 2008 (UTC)
On the one hand, I agree with Marc that it would be better to have a separate article dedicated to noncommutative polynomials, and perhaps another one dealing with the ring extension R ⊂ R[X] in the context of noncommutative rings, where the theory is quite different. I cannot speak on behalf of the entire research community in noncommutative rings, but it appears quite unusual to assume by default that a "polynomial ring" (the subject of this article) is noncommutative. That is not how the term is commonly used. On the other hand, there is nothing wrong with briefly mentioning noncommutative theory under "generalizations", especially if this is done in summary style. Arcfrk (talk) 18:11, 4 March 2008 (UTC)
To be clear, I am not asserting that anything in the generalizations is the standard primary meaning of polynomial ring, but I am asserting that the non-commutative ring theory research community does not assume that R[X] is commutative, because they do not assume that R is commutative. For example Köthe's conjecture can be phrased as several equivalent statements about how ideals composed of nilpotent elements behave under polynomial rings; the conjecture is trivially true for commutative rings or noetherian rings, but has been an area of active research since the 30s. Polynomial extension preserves nice properties such as prime, semiprime, noetherian, being Ore, but for instance it is an open question whether R[x] being Ore implies R is Ore. Polynomial extension can be very complex: it does not preserve the property of Goldie in general, but of course does for commutative rings and noetherian rings. Generally speaking, it is a standard question in ring theory, for a given property P, does R have P iff R[X] has P? JackSchmidt (talk) 18:54, 4 March 2008 (UTC)

Other generalizations

First off, thanks to Arcfrk and others for lots of improvements to the article. I noticed the new link to Laurent polynomial, k[t,1/t], which is technically covered under the monoid definition, but not otherwise mentioned. It seems this is a pretty important part of polynomial rings, but so are function fields (which is currently only a disambig; I mean the field of fractions of a polynomial ring over a field). I was thinking of adding these to the generalizations section somewhere, but I worry about making that section too large. One reason I think polynomial rings are so important is that quite a lot of algebra is in some sense a generalization of polynomial rings!

Another wonderful edit by Arcfrk linked to Ore extension. I suggest that someone (I'll do it) trim down the skew+differential section, and instead put more detail into the Ore extension article. In other words, one combined paragraph on skew and differential rings. The only really large section left then would be the monoid ring section, and I think it is mostly big because it does a double or triple duty of formalizing the earlier not-generalized cases.

It might be wise to discuss which generalizations would then use the "room left". Laurent polynomials, function fields, rings of polynomials on varieties, affine schemes, polynomial representations of algebraic groups, ... clearly not all should be included directly, but it might be possible to organize it so that most top-level generalizations are mentioned with a sentence or two and a wikilink.

One might be able to condense free algebras and Ore extensions into the same section, but I worry that this must be done carefully. "Non-commutative polynomial ring" links to free algebras, and I think this is the predominant primary meaning, but already there has been confusion about whether a polynomial ring with coefficients in a non-commutative ring is a "non-commutative polynomial ring", etc. Sticking all non-commutative generalisations under one heading might be inviting a comedy of confusion.

If the ideas sound basically plausible, here is my preliminary suggestion:

  • Monoid ring (since it makes it easy to discuss others in a vaguely uniform way)
  • Localizations and their completions (power series, Laurent polynomials, rational functions)
  • Noncomm (free algebra, Ore extension)
  • Geometry (regular local ring, affine scheme)

I'm not sure if/where algebraic groups and their polynomial representations belong. Maybe there should be a blurb on symmetric functions? Is there a nice category they could fit under that would have one or two other interesting topics? Maybe some belong more under a "uses" than a "generalizations"? JackSchmidt (talk) 02:11, 6 March 2008 (UTC)

Thanks for your kind assessment, I am glad to be of service. I think that you worry too much about condensing. At the moment, all sections in "Generalizations" look about the right length. The section on noncommutative polynomials might be a tad too dry: at least it can mention the notation K<X,Y> and perhaps display the expansion of a noncommutative polynomial on a separate line. I suggest not going into too much detail with fields and regular local rings. They can be mentioned casually in the text or linked under "See also", but making separate sections may be too much.
Of course, "Ore extension" requires a lot of work and if anything from this article can be of use, go ahead and put it in there. But I think that short description and links to the Weyl algebra should stay, as this is one of the most fruitful noncommutative analogues of the polynomial rings. Arcfrk (talk) 02:34, 6 March 2008 (UTC)

Definition of polynomials

Something's very wrong here. The definition says a polynomial is a kind of expression, but lists both formal symbols and elements of a field as being parts of the expression; an expression can only have other expressions as parts, and the elements of a field are usually not expressions. Adam.a.a.golding (talk) 04:28, 21 December 2009 (UTC)

Coming back to this after two days away, I think that the definition still needs work. The problem is that people will think of polynomials as representing functions: such as X^2+X representing the function f from R to R defined by f(x)=x^2+x. However, thinking of R[X] as a set of functions from R to R is a mistake for several reasons. The main one is that several different-looking polynomials can be the same function, for example X^2+X and 0 are always equal if R is GF(2), but they are different elements of R[X]. We can also note that in the case of finite R every function from R to R can be written as a polynomial. I think the right way to think of R[X] is the one given here: http://planetmath.org/encyclopedia/PolynomialRing.html . That is, the polynomials are just convenient ways to write down sequences. That definition also makes it clear that R[X] and R[Y] are not just isomorphic but identical. --Zero 12:06, 21 Jan 2005 (UTC)
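A tiny illustration of the distinction (a sketch in Python; the tuple representation is chosen here just for the example): over GF(2) the polynomials X^2 + X and 0 have different coefficient sequences but define the same function.

    # Coefficients over GF(2): a polynomial is a tuple (a0, a1, a2, ...).
    p = (0, 1, 1)    # X^2 + X
    z = (0,)         # the zero polynomial

    def evaluate(poly, x):
        """Evaluate a GF(2) polynomial at x in {0, 1}; arithmetic is mod 2."""
        return sum(c * x**i for i, c in enumerate(poly)) % 2

    print(p == z)                              # False: different polynomials
    print([evaluate(p, x) for x in (0, 1)])    # [0, 0]
    print([evaluate(z, x) for x in (0, 1)])    # [0, 0]: the same function on GF(2)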

You are certainly right. I alluded to this problem above. I will soon get to explaining what exactly a polynomial is. Oleg Alexandrov | talk 16:26, 21 Jan 2005 (UTC)

I was ready to work more on this article, but when I checked out the polynomial page, I found an excellent section there about polynomials in abstract algebra, see Polynomial#Abstract algebra. So what should we do? Shorten the thing over there and move most stuff here? Do nothing? Make a copy of that stuff? The only thing missing on that page is the formal construction of the polynomial ring, by means of sequences of finite length, as Zero says above. But I am not even sure that is necessary. Help! Oleg Alexandrov | talk 01:25, 23 Jan 2005 (UTC)

I hadn't noticed that. Actually there is another thing missing from Polynomial#Abstract algebra: the multivariate case. I think the simplest formal way to define R[X,Y] is that it means (R[X])[Y]; whether that is the easiest definition to understand, I'm not sure. As to what to do with this article, I think we should fix our own definitions to be consistent, then add a "for more information see" link from polynomial to here. I don't think we should reduce the material at polynomial. --Zero 02:27, 23 Jan 2005 (UTC)
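For instance (example mine), the identification of (R[X])[Y] with R[X,Y] just regroups terms:

    (1 + 2X) + (3X^2)\,Y \ \in\ (R[X])[Y] \qquad \longleftrightarrow \qquad 1 + 2X + 3X^2 Y \ \in\ R[X, Y],

and collecting by powers of X instead exhibits the same element of (R[Y])[X].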
Well, fixing the definitions to be consistent probably means copying that thing over, and then working from there. Other ideas? By the way, what is written in Polynomial#Abstract algebra is indeed neat, I could not have written it so well myself. Oleg Alexandrov | talk 02:30, 23 Jan 2005 (UTC)
Go for it. --Zero 06:08, 23 Jan 2005 (UTC)

visual/basic

Polynomial rings sound very visual... if this is so, where are the pictures to help the novice start to grasp what all those equations are talking about? Please at least start out with some more visual/basic concepts. I am not mathematically illiterate, but I am having problems with this particular article. 97.126.133.44 (talk) 21:57, 29 January 2010 (UTC)

Polynomials in one and several variables

I have restored short sections on polynomials in one and several variables: it sets up the proper context for the discussion of the polynomial rings in one and several variables. This material is not treated in the article Polynomial, which focuses mostly on elementary and analytic aspects of the theory, and in any case, it is needed to introduce and explain the notation. Arcfrk (talk) 22:34, 5 July 2010 (UTC)

Larger problems with this article

Without getting into the notation debate, let me point out that this article has more serious flaws, and with direct bearing on novices who may try to read it.

  • Definition of a polynomial: If this article aims at someone not well versed in algebra, I suggest starting with polynomials over a (commutative) field, and only in a later section mentioning that the coefficients may be taken in a more general ring. The case of K[X] is by far the most useful in applications and it has quite distinguishing features: it's a domain, and moreover, a principal ideal domain, PID (in fact, even a Euclidean domain), with all the usual corollaries for the multiplicative structure and the classification of modules and ideals and the corresponding homological properties. The motivation for treating powers of X in a formal way can also be smoothed out, by starting with the fields Q and R and then doing things in complete generality. The only advantage of using an arbitrary ring R of coefficients seems to be that Hilbert's finiteness theorem can be stated in one line, but it seems to be far outweighed by the disadvantages of working in unnecessary generality and constantly worrying about the extra hypotheses needed to state even the most basic properties of polynomial rings (such as being a domain, or factoriality).
  • The polynomial ring in several variables: the one-sentence description does not do justice to the subject. It does not even acknowledge the inherent symmetry between the variables ((R[X])[Y] = (R[Y])[X]), let alone point out that polynomial rings in several variables are fundamentally more complex objects: think of the Serre conjecture, for example. "Alternative definition" in the same section is weirdly out of place. What is its function, actually? Almost no one (in algebra) thinks of the polynomial ring in n variables as the semigroup ring of the free commutative cancellative semigroup on n generators. But this is, in effect, the point of view that this subsection (along with a later subsection on "monoid rings" under Generalizations) is trying to promote.
  • Properties: this is a fairly dense and somewhat random list, which mixes fundamental properties with unnecessary abstract nonsense (thank you, Lang!)
  • Some uses of polynomial rings: ditto, and a lot more eclectic at that. Wanting explanations, the same items may also be dubbed "uses of factorization by ideal" or pretty much anything else with algebraic flavor. Needs a lot of work.
  • To summarize: beyond the definition of the ring structure, which, in my view, is given in unnecessary generality, there is hardly any connected English text explaining what polynomial rings are and how they are used.

Arcfrk (talk) 04:11, 1 March 2008 (UTC)

You're very welcome to work on the article, as long as the topic is kept accessible and more complex issues are treated further down the article. Oleg Alexandrov (talk) 06:39, 1 March 2008 (UTC)
I agree the article should focus on the importance of the polynomial rings, not their definitions. If working with Q[x], R[x], and C[x,y] lets us get to the good parts quicker, then I say leave the formal definition to be "a more complex issue treated further down the article".
One purpose of the generalizations section was to indicate how the fundamentally important idea of polynomial rings has informed ring theory, and so subtly address the problem of "what are these polynomial rings for?". Another subtle purpose was to have the monoid definition give a simple, formal definition which generalized easily to Union k[x^(1/n!)], k<x,y>, kG, etc. Rather than "use complex ideas to explains simple things", I hope I include the "complex ideas" at the end, giving one paragraph summaries of "complex things", rather than multi-paragraph explanations of "simple things".
This mitigates any real need to include formal definition sections, and hopefully encourages a more grounded approach in the earlier sections of the article. I like the idea of beginning with Q[x] and R[x], then perhaps (Z/pZ)[x] to emphasize the difference between polynomial functions and formal polynomial rings. I would also be fine with maintaining such explicit examples for a several variables section, perhaps using C[x,y] as an example.
Note that my text was not meant as an endorsement of the monoid approach, merely recording the fact that it existed. I personally tend to lean in the "they are polynomials, they form a ring, what's there to say?" camp as far as the definition goes. I would very much like the article to concentrate on their importance, not their formal definition. The generalizations section is meant to address their importance, by describing their "children". JackSchmidt (talk) 06:41, 1 March 2008 (UTC)
Just to make it clear: I like the "Generalization" section and think that it is by far the best part of this article. But without the foundation to build upon, it's hard to expect non-experts getting much out of it, or even reading that far. Also I have no problem with treating monoid algebras there, especially, in summary style, it was the convolution thingy in "the polynomial rings in several variables" that got me all worked up (and by extension, my axe fell upon the innocent head — brrr!) Arcfrk (talk) 08:47, 1 March 2008 (UTC)
  • What about polynomials over Euclidean domains (e.g. over Z), PIDs, etc.? —Preceding unsigned comment added by 171.67.87.40 (talk) 02:11, 12 October 2010 (UTC)

Article rating

What is this rating or survey box at the end of the article? I couldn't find any template code or link to an article explaining that thing? -- 194.24.158.1 (talk) 20:57, 8 April 2011 (UTC)

It is javascript triggering on the inclusion in Category:Article Feedback Pilot. The category describes it a little and links to the project. JackSchmidt (talk) 03:07, 9 April 2011 (UTC)

Again on field and ring

Perhaps there are historical and pedagogical reasons for starting with a field rather than with an arbitrary ring, but, at present, a section on "Polynomials in one variable over a commutative ring" is missing, before the section "The polynomial ring in several variables". And I see no reason for dealing exclusively with coefficients in a field in the case of several variables.--Paolo Lipparini (talk) 16:52, 12 January 2013 (UTC)

The phrase at the end of The polynomial ring K[X] "More generally, the field K can be replaced by any commutative ring R, giving rise to the polynomial ring over R , which is denoted R[X]" is supposed to suggest taking the coefficients in R and otherwise defining it the exact same way. ᛭ LokiClock (talk) 20:17, 3 September 2013 (UTC)

"variable"

I noticed that the article uses the term "variable" to refer to the X_i elements. These clearly aren't variables in any typical sense of the word, but is it standard to call them that anyway? I've seen them called "indeterminates" elsewhere, but I don't know how widespread that is. Part of the reason I ask is that I'd like to expand Rational_expression#Abstract_algebra a bit but I don't know what term I should use. Rckrone (talk) 18:26, 6 August 2009 (UTC)

I changed the word 'variable' to 'indeterminate' in the lead and got some pushback (see User talk:RDBury#Indeterminant). The distinction is subtle but important. I've tried to expand on the difference in the article Indeterminate (variable), but it's important here to make the distinction between the polynomial ring and the ring of polynomial functions. Over R and C they are isomorphic so it does no harm to confuse the two and the result is that the term 'polynomial' is somewhat loosely defined for most people. Over finite fields they are different though, for example x^2 = x in the ring of polynomial functions over GF(2) but X^2 ≠ X in the polynomial ring over GF(2). Formally, it's better to think of the polynomial ring as the set of infinite tuples (a0, a1, ... ), where almost all of the ai are 0, written with a notation that makes addition and multiplication intuitive. Usually the word 'variable' means a letter that represents an unknown or undetermined value in the field, but the letter X in the polynomial ring is really just a placeholder so it should be called something else. There is a homomorphism k[X] → k^k defined by "evaluating" the polynomial, but it's not one to one for finite fields. It's important to be looking at polynomials as formal expressions rather than as functions because otherwise there is no meaningful definition for degree, and without that the Division algorithm breaks down and you don't get to prove neat things like the existence of primitive roots in finite fields. Herstein uses the word 'indeterminant' for X and Lang gets caught up in Langian formalism so he doesn't get around to calling it anything (that I could find). Perhaps there is other terminology current in the literature, but though it may be unfamiliar to many readers, it's the one I've seen used most often. (I remember when I first came across the word 'indeterminant' and thinking it must be some vague kind of determinant.) --RDBury (talk) 04:08, 31 August 2013 (UTC)
Correction, actually Herstein uses 'indeterminate' not 'indeterminant', though I believe (this may be just my dyslexia talking) I've seen it both ways. I'll try to use 'indeterminate' in the future. --RDBury (talk) 05:02, 31 August 2013 (UTC)
I agree that the distinction between a polynomial and the function that it defines is very important. But it cannot be done through a subtle (and wrong, IMO) distinction between "variable" and "indeterminate". Firstly, one may remark that the indeterminate in a polynomial is not a placeholder, but a constant in the polynomial ring. On the other hand, the x in the functional notation f(x) is a placeholder. This is clear in the notation where x stands for anything. In fact, in modern mathematics, a variable is a symbol that may represent any mathematical object, including itself. The interpretation of a variable as indeterminate, constant, parameter, placeholder, etc. is strongly context dependent and may not be mathematically defined. For example, ax^2 + bx + c may be viewed as a univariate polynomial in which x is an indeterminate and a, b, c are constants representing real numbers. It may also be viewed as a polynomial in four indeterminates. If it is viewed as an element of a univariate polynomial ring, a, b, c are now variables. Thus in the same expression, a, b, c are either constants, variables or indeterminates, depending on what is to be done with the expression. On the other hand, ax^2 + bx + c can not be viewed as a function without abuse of language, the correct notation for the quadratic function being x ↦ ax^2 + bx + c. D.Lazard (talk) 10:21, 31 August 2013 (UTC)
I think at this point we mostly agree. I like the presentation given by Hall since it defines the ring as sets of tuples to start with, then introduces the polynomial-like notation for it. This may be too textbooky to use in WP, but I'd like to take a shot at Frankensteining that plus Herstein and what is already in the article. For the lead, how about rephrasing to avoid both variable and indeterminate? Now that I'm thinking about it neither term describes the concept succinctly, especially for readers unfamiliar with the subject of the article. --RDBury (talk) 15:40, 31 August 2013 (UTC)
If you rewrite this article, please be careful, as there are several ways to define polynomials. IMO, a main definition has to be chosen and a section is needed to present the other definitions and to show that they are equivalent to the main one. The choice of the main definition needs some discussion. The main definitions that appear in the literature are the following ones:
  • Infinite sequences of coefficients that are eventually zero (or finite sequences with nonzero last element, the zero polynomial being the empty sequence). Although very classical, these definitions have two drawbacks: firstly, they are counterintuitive, as the polynomials are not defined as they are commonly written. Also, they work only for univariate polynomials, and a completely different definition is needed for multivariate polynomials (usually as univariate polynomials with polynomial coefficients, a definition which needs a proof that it does not depend on the order of the indeterminates).
  • Expressions that are constructed from the coefficients and the indeterminates by addition, subtraction and multiplication. In fact, these expressions are "polynomial expressions", and the polynomials are the cosets under the equivalence relation "P may be rewritten to Q by using distributivity, associativity and commutativity". I do not think that it is a good way of presenting polynomials.
  • Formal sums of products of a (monic) monomial by a nonzero coefficient (the zero polynomial is the empty sum), in which all monomials are different. Here we have to identify two such sums that differ only by the ordering of the terms. IMO, this is the best definition, as it follows the usual intuition and unifies the univariate and the multivariate cases (see the sketch after this comment).
D.Lazard (talk) 18:12, 31 August 2013 (UTC)
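A sketch of that third definition in symbols (my notation): a polynomial in X_1, ..., X_n over R is a formal sum

    \sum_{\alpha \in \mathbb{N}^n} c_\alpha\, X_1^{\alpha_1} \cdots X_n^{\alpha_n}, \qquad c_\alpha \in R, \quad c_\alpha \neq 0 \ \text{for only finitely many } \alpha,

with two such sums identified when they differ only in the ordering of the terms; the case n = 1 gives the univariate definition with no change in the wording.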
I'm thinking encyclopedic coverage would incorporate all of the above, starting with the most intuitive. In any case, I'll proceed carefully. --RDBury (talk) 19:19, 31 August 2013 (UTC)
Conceptually, a variable over R takes values in R. One first has to generalize one's idea of a variable over R to obtain the concept of an indeterminate. This is a common way of introducing the complex numbers, treating i as a black-box entity whose value squared is -1, hence as a variable. But insights and their induced natural generalizations such as these should be documented, not assumed. ᛭ LokiClock (talk) 21:52, 3 September 2013 (UTC)
As far as I know, "variable over R" is an original research concept. D.Lazard (talk) 09:20, 4 September 2013 (UTC)
D.Lazard, I agree with most of what you say, but not with this: "in modern mathematics, a variable is a symbol that may represent any mathematical object, including itself". A symbol may represent other mathematical objects, or it may not do so, in which case it just is itself, without any representation going on (I find the idea of a symbol doing nothing but representing itself rather circular). But in the latter case the symbol is not a variable (nor in fact is it so when it designates one fixed other mathematical object, as is the case for the constant π). The symbol i in the context of complex numbers may represent nothing other than itself, but it is not a variable. I wouldn't call it an indeterminate either because that term suggests there are no special relations attached to the symbol. In the context of polynomial rings, I find the term "indeterminate" more appropriate (and certainly "constant" would be confusing), even if "variable" has a lot of historic credit. Marc van Leeuwen (talk) 15:25, 5 September 2013 (UTC)
D. Lazard, that's my word choice, not the concept. What I'm calling a variable here is a symbol which can take any value within a certain set; the definition you would get in an elementary school class is that it is a symbol which can stand for any number - what I just called a variable over R. Regardless of what counts as a variable or a function, this is what people will assume indeterminates are unless alerted to this pitfall, and polynomials will be assumed to be functions whose arguments are the same kind of variable - ones which take values in R. ᛭ LokiClock (talk) 09:05, 7 September 2013 (UTC)

Evaluating

In this section it reads: implies . Why not simply implying ?Nijdam (talk) 11:22, 26 September 2013 (UTC)

You are right. However, this section was full of mathematical nonsense. Therefore, I have completely rewritten it instead of making this correction. D.Lazard (talk) 13:08, 26 September 2013 (UTC)
The section was written to:
1a) To contrast polynomials with their values in general, describing the phenomena that differentiate them.
1b) In particular, to derive the contradiction between the definition of polynomials as elements of this ring and the definition of polynomials as functions from the ground ring to itself, so that it is clear that the algebraic definition does not produce all the same consequences and identifications, and that it is an error to assume that the reasoning for when two polynomial functions are equal also applies to polynomials in the ring.
2a) To describe how the polynomial ring, when considered through abstract algebra, contains the information necessary to produce values in the ground ring from each constant of the ground ring, and hence how the polynomial ring subsumes the concept of polynomial functions.
2b) By consequence, to document how polynomials in the ring are reasoned about.

It's stated that to illustrate the phenomenon that a homomorphism sending to 0 will map the value of any expression in the variable to the value of the expression with the constant substituted for the variable (which makes little Bézout's theorem automatic). This satisfies purpose (2a). This is done with some abstract algebraic formalism to show that one would analyze a case like not as a logical contradiction, but as producing a quotient of the ground ring. This satisfies purpose (2b). Congruence relations are chosen instead of ideals because they make the phenomenon being reproduced transparently a familiar one.
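To make the evaluation phenomenon concrete, here is a small sketch under the assumption of integer coefficients and a single indeterminate (helper names are hypothetical): evaluating at a constant a agrees with the remainder of division by X - a, which is little Bézout's theorem.

def evaluate(coeffs, a):
    """Horner evaluation of sum(coeffs[i] * X**i) at X = a."""
    value = 0
    for c in reversed(coeffs):
        value = value * a + c
    return value

def divide_by_x_minus_a(coeffs, a):
    """Synthetic division by (X - a): returns (quotient coefficients, remainder)."""
    quotient = [0] * (len(coeffs) - 1)
    carry = 0
    for i in reversed(range(len(coeffs))):
        carry = carry * a + coeffs[i]
        if i > 0:
            quotient[i - 1] = carry
    return quotient, carry

# P = X^2 - 3X + 2, written with the constant term first.
P = [2, -3, 1]
quotient, remainder = divide_by_x_minus_a(P, 2)
assert remainder == evaluate(P, 2) == 0   # 2 is a root, so X - 2 divides P
assert quotient == [-1, 1]                # P = (X - 2)(X - 1)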

The loose treatment of congruence relations and the non-derivation of the equivalence with ideals is somewhat didactic and elaborate, and could be replaced by a matter-of-fact illustration of a consequence of declaring that for a congruence relation. The axioms of a congruence on a ring might still be included to motivate the conclusions, but would not be a detriment to the article beyond the expense of verbiage.

The new formalism accomplishes purposes (1b, 2a), but it limits the view of the bigger picture by merely stating that substitution of a constant for the variable defines a homomorphism to the ground ring. If instead the general pattern for constructing a homomorphism is used to construct these homomorphisms that must be documented for special reasons, it will be clear not only how they constitute homomorphisms, but how other homomorphisms may be constructed. This includes (finite-degree) field extensions, which are sometimes introduced as abuses of evaluation or described as indeterminates with extra relations. Demonstrating that rings behave in the manner needed to make cases like generalized evaluation rigorous fulfills purpose (1a) satisfactorily. It's one of the ways the discussion of evaluation can document the polynomial ring's role in mathematics in general. ᛭ LokiClock (talk) 16:06, 2 October 2013 (UTC)
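A concrete sketch of "an indeterminate with an extra relation" in the sense alluded to above (illustrative code only, assuming real or rational coefficients): computing in the quotient of R[X] by X^2 + 1 reproduces complex multiplication, the kind of generalized evaluation that a quotient by a maximal ideal provides.

def multiply_mod_x2_plus_1(p, q):
    """Multiply a + b*X and c + d*X in R[X]/(X^2 + 1), reducing X^2 to -1."""
    a, b = p
    c, d = q
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2, and X^2 = -1 in the quotient.
    return (a * c - b * d, a * d + b * c)

# The class of X squares to -1, exactly like the imaginary unit, and
# (1 + 2X)(3 + 4X) reduces to -5 + 10X, matching (1 + 2i)(3 + 4i).
assert multiply_mod_x2_plus_1((0, 1), (0, 1)) == (-1, 0)
assert multiply_mod_x2_plus_1((1, 2), (3, 4)) == (-5, 10)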

I see though that the revised section allows the values to be contained in any ring containing K, which is pretty clever, but doesn't explain how one is supposed to set elements of to elements of if the field is not thought of as a set of elements, but as an object, nor how is acquiring the ability to reference elements of , so it's ultimately confusing under the circumstances. ᛭ LokiClock (talk) 17:07, 2 October 2013 (UTC)
What is "a field that is not thought as a set of element"? Every field is together a set of elements and a mathematical object (every set is a mathematical object). What does mean the "ability [for a mathematical object] to reference elements of a set? D.Lazard (talk) 17:47, 2 October 2013 (UTC)
As in, there is only one field of order , but could appear as two different sets, so when possible it's logical to read references to K as references to an object, not to a set. But I'm being pedantic. To reference an element of R, well, you're equating two elements from across different sets, and it's not declared how the polynomial ring can or does do anything with R; there are no axioms being given for how they interact. Your edit does address this, but by passing to a word or expression instead of working with the polynomial directly, which to me seems to avoid ring theory rather than express the evaluation naturally. In general you're suggesting the natural inclusion followed by the usual evaluation, but what I originally described was meant to suggest taking quotients by arbitrary maximal ideals as a natural extension of the concept of evaluating at an element of the ring. ᛭ LokiClock (talk) 00:53, 3 October 2013 (UTC)

Serious shortcomings

I think some people have already made this point, but it seems rather silly to start off by defining polynomials over a field, and then say, "oh by the way, we can replace K with a commutative ring R" (there's no reason to require R to be commutative anyway).

There's absolutely no simplification achieved in the description by taking coefficients in a field. It makes a lot more sense to just start off defining polynomials over a general ring, and then saying, "hey, if R is actually a field, we get some extra nice properties."

Moreover, it would be nice to have a summary of ring properties that are passed on to polynomial rings. Like if R is a field, then R[X] is a euclidean domain, etc.
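As a sketch of why the field assumption matters for that last property (illustrative Python only, using rational coefficients via Fraction): Euclidean division in K[X] divides by the leading coefficient of the divisor at each step, which requires that coefficient to be invertible.

from fractions import Fraction

def poly_divmod(num, den):
    """Euclidean division in K[X]: returns (quotient, remainder) with
    deg(remainder) < deg(den). Polynomials are coefficient lists, constant term first."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quotient = [Fraction(0)] * max(len(num) - len(den) + 1, 1)
    remainder = num
    while len(remainder) >= len(den):
        shift = len(remainder) - len(den)
        # Dividing by the leading coefficient is the step that needs a field.
        factor = remainder[-1] / den[-1]
        quotient[shift] = factor
        for i, c in enumerate(den):
            remainder[shift + i] -= factor * c
        while remainder and remainder[-1] == 0:
            remainder.pop()
    return quotient, remainder

# X^2 + 1 = (X + 1)(X - 1) + 2 over the rationals.
q, r = poly_divmod([1, 0, 1], [1, 1])
assert q == [Fraction(-1), Fraction(1)] and r == [Fraction(2)]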

— Preceding unsigned comment added by 87.112.137.64 (talkcontribs) 15:35, 5 January 2016‎

Please sign your posts on talk pages with four tildes (~~~~).
From a logical point of view, you are right. However, Wikipedia is an encyclopedia and, as such, must be as accessible as possible for as many readers as possible (see WP:TECHNICAL for details). As most readers interested in polynomials are primarily interested in polynomials over a field, it is convenient to emphasize this case. This does not simplify the presentation, but it makes it easier to understand for most readers.
The properties that pass from rings to polynomial rings are listed (possibly with omissions) in the section "Properties of the ring extension R ⊂ R[X]". The property "if R is a field, then R[X] is a Euclidean domain" does not belong to that section, but is described in detail in the section "Factorization in K[X]". D.Lazard (talk) 18:04, 5 January 2016 (UTC)
Anyone who knows what a field is should already know what a ring is (even more so since this article is about a particular kind of ring). So starting here doesn't decrease accessibility one iota. Why would you assume that a visitor to this page is more interested in polynomials over fields instead of more general rings? Students of abstract algebra invariably learn the definition of a ring before that of a field. And for less advanced readers, there's already a page for polynomial at a less advanced level. Starting off with special cases can be fine when it simplifies the presentation, but here it does just the opposite: it makes it more convoluted. 87.112.137.64 (talk) 22:04, 5 January 2016 (UTC)

A search for "polynomial multiplication" redirects to this page, but it does not show the distributive process like at Distributive property. — Preceding unsigned comment added by 99.127.236.195 (talk) 22:54, 27 February 2017 (UTC)
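A minimal sketch of that distributive process (coefficient lists with the constant term first; the function name is illustrative): every term of the first factor is distributed over every term of the second, which amounts to a convolution of the coefficient sequences.

def poly_multiply(p, q):
    """Multiply two polynomials given as coefficient lists (constant term first)
    by distributing every term of p over every term of q."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b   # a*X^i times b*X^j contributes to X^(i+j)
    return result

# (1 + X) * (1 - X + X^2) = 1 + X^3
assert poly_multiply([1, 1], [1, -1, 1]) == [1, 0, 0, 1]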

I have edited Polynomial multiplication to redirect to Polynomial#Arithmetic, which is clearly a better target. D.Lazard (talk) 09:38, 28 February 2017 (UTC)

Infinitely many variables

I tried to clean up some of the text of this section. Since the article's original definition doesn't allow for infinitely many variables, this really is a generalization (even if it's a minor one), in contrast to the earlier revision. It also seemed a bit dismissive to say that nothing really new happens in this case. But poking around a little, it seems that this is only sometimes true. For example, for a field K, the ring K[X1, X2, …] is still a UFD, but if R is Noetherian, it's no longer true that R[X1, X2, …] is Noetherian (in contrast to finitely many variables; for instance, the ideal generated by all the variables is not finitely generated). So it seems like some more could be said here, but I don't really have any good sources, so I left the tag for someone who knows this stuff better. --Deacon Vorbis (talk) 14:53, 24 July 2017 (UTC)

Composition of polynomials

 – Deacon Vorbis (carbon • videos) 14:57, 30 October 2019 (UTC)

For any polynomial Q, the map P ↦ P(Q) is an algebra endomorphism of K[X] as a K-algebra, and conversely, any algebra endomorphism of K[X] is of the form P ↦ P(Q) for exactly one polynomial Q. pma 08:08, 7 November 2019 (UTC)
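A small sketch of the substitution P ↦ P(Q) on coefficient lists (constant term first; helper names are illustrative), using Horner's scheme so that composing with Q only needs addition and multiplication of polynomials:

def poly_mul(p, q):
    """Multiply coefficient lists (constant term first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    """Add coefficient lists of possibly different lengths."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def compose(p, q):
    """Coefficient list of P(Q), by Horner: P(Q) = (...(p_n*Q + p_{n-1})*Q + ...)*Q + p_0."""
    result = [0]
    for c in reversed(p):
        result = poly_add(poly_mul(result, q), [c])
    while len(result) > 1 and result[-1] == 0:   # drop trailing zero coefficients
        result.pop()
    return result

# P = X^2 + 1 and Q = X + 1 give P(Q) = (X + 1)^2 + 1 = X^2 + 2X + 2.
assert compose([1, 0, 1], [1, 1]) == [2, 2, 1]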

"Polynomial composition" is an operation on polynomials that is defined in Polynomial § Arithmetic. It deserve to be mentioned here as a special case of polynomial evaluation (substitution of the indeterminate of a polynomial for another polynomial). The inverse operation polynomial decomposition deserves also to be linked here and in Polynomial. D.Lazard (talk) 11:15, 7 November 2019 (UTC)

field and ring redux

The notation S[X] is defined as having coefficients from a field S. It is only revealed at the end of a long section that the definition equally applies to any ring, not just a field. This is so far away from the definition that I think someone looking for the meaning of the notation Z[X] may well have given up before reaching the end of the section. When I read the definition, I just thought it was too restrictive, not noticing the later sentence hidden out of sight, on my laptop two screens down, on a mobile phone even eight screens down. Shouldn't this be noted earlier, before going into all the detail, none of which uses any property of S other than ring properties?  --Lambiam 18:01, 15 May 2020 (UTC)

I support Lambiam. - Nomen4Omen (talk) 18:38, 15 May 2020 (UTC)
I missed exactly who was making what edits, but I do think that the initial basic statements and definitions should simply have the coefficients be from a ring. And then after that, one can say, "Oh, and if the coefficient ring is actually a field, then certain nice properties are guaranteed", etc. I'm being pretty vague about it, but this has always struck me as a strange way to introduce the article. –Deacon Vorbis (carbon • videos) 18:47, 15 May 2020 (UTC)
Sorry for having reverted Lambiam: it was late and I had exchanged (in my mind) the old and the new versions. I have restored the mention of rings at the beginning of the section, but I have left the emphasis on fields because they may be more familiar to many readers. I have also made some changes to make clear when K must be a field and when this is not required. D.Lazard (talk) 20:34, 15 May 2020 (UTC)
I agree with Deacon Vorbis. Reading this article again, it appears that it must be completely restructured. It must begin with the important and most common case of one variable over a field, with emphasis on its resemblance to the ring of integers (presently lacking, except for the paragraph of the lead that I have just added). Then a section on univariate polynomials over a ring is needed to show that most aspects of the multivariate case reduce to this case, by induction on the number of variables. Finally, a third part is needed, about the multivariate case over a field, in relation to algebraic geometry. By the way, I do not understand why there is a whole section on Hilbert's Nullstellensatz, and nothing about Hilbert's basis theorem and Bézout's theorem (among other fundamental theorems specific to multivariate polynomials over a field). D.Lazard (talk) 09:30, 16 May 2020 (UTC)
To be clear, this isn't what I was advocating. We should start with the general notion where the coefficients come from a ring. There is no reason to talk about fields right away. Those only need be mentioned when discussing what properties polynomial rings have (like UFD when coefficients are from a field, etc). But it's unusual to start with coefficients from a field, both logically and pedagogically. –Deacon Vorbis (carbon • videos) 13:39, 16 May 2020 (UTC)
I agree with your last sentence, when polynomials are defined in a course on abstract algebra. But WP is not a textbook, and many people use polynomials without really knowing about general rings (this is the case, for example, for many mechanicians). So we have to choose a compromise between generality and ease of access for most people. Also, if we start from the general case, we should include the multivariate case from the beginning. IMO, this would make the article too technical. So, the solution seems to be to state clearly the scope of each section in the section headings. This is what I have started. D.Lazard (talk) 15:29, 16 May 2020 (UTC)