

June 21, 2010

Exact Squares

Posted by Mike Shulman

One of the slogans of $(\infty,1)$-category theory is that most theorems of ordinary 1-category theory generalize to the $(\infty,1)$-world if you’re careful enough with how you phrase them. One might hope that the proofs of these theorems could likewise be generalized, and this is sometimes the case, but many of the more complex tools of $(\infty,1)$-category theory don’t have much of a counterpart in the 1-categorical world, such as the study of various types of anodyne morphisms of simplicial sets.

Recently I’ve been talking, reading, and writing about derivators, which I view as a way of working with $(\infty,1)$-categories that is less dependent on technical details of any particular definition of $(\infty,1)$-categories, is more closely connected to familiar techniques of 1-category theory, and may hopefully be easier, in some absolute sense, than working directly with $(\infty,1)$-categories. However, generalizing an argument from 1-categories to derivators does usually still require rephrasing it a bit, because derivators are based around an approach to 1-categorical limits and colimits that isn’t that well known in many circles – the theory of exact squares of categories. So in this post I want to talk about the calculus of exact squares, mainly from a purely 1-categorical viewpoint, but I’ll add some remarks at the end about how easy it is to generalize the arguments to derivators.

In order to get used to the calculus of exact squares, the first thing we have to change is to start thinking more in terms of Kan extensions rather than limits and colimits. For instance, it’s usual to think of a pullback as the limit of a diagram of shape $B \to C \leftarrow A$ (a cospan), i.e. an object $P$ equipped with a natural transformation of cospans:
$$\array{ P & \to & P & \leftarrow & P\\ \downarrow & & \downarrow & & \downarrow \\ B & \to & C & \leftarrow & A }$$
which is universal. From this point of view, a pullback square is just a square
$$\array{ P & \to & A \\ \downarrow & & \downarrow \\ B & \to & C}$$
that we put together from some of the arrows drawn above; thus the presence of that square is “hidden” inside the natural transformation of cospans.

A different approach, however, is to give primacy to this square, rather than to the object $P$. In this approach, we start instead by defining a pullback square of a cospan $B \to C \leftarrow A$ to be a commutative square
$$\array{ P & \to & A' \\ \downarrow & & \downarrow \\ B' & \to & C'}$$
equipped with a natural transformation of cospans
$$\array{ B' & \to & C' & \leftarrow & A'\\ \downarrow & & \downarrow & & \downarrow \\ B & \to & C & \leftarrow & A }$$
which is universal. In other words, we ask for a (pointwise) right Kan extension along the inclusion of the “walking cospan” into the “walking commutative square.”

In order to be sure that this gives the same notion of pullback that we’re used to, we need to know two things:

  1. The underlying cospan $B' \to C' \leftarrow A'$ of the pullback square should be the same as the cospan we started with, and
  2. The resulting universal property of the object $P$ and its projections to $A$ and $B$ should be the same.

Both of these facts are naturally expressed in the language of exact squares.

What is an exact square? Suppose we have a square in $Cat$ inhabited by a natural transformation:
$$\array{ X & \overset{u}{\to} & Y \\ ^v\downarrow & ^\alpha \swArrow & \downarrow^g \\ Z & \underset{f}{\to} & W }$$
Then if $M$ is some ambient category, we write $f^\ast\colon M^W \to M^Z$ for restriction along $f$, and $f_!\colon M^Z \to M^W$ for left Kan extension along $f$, and $f_\ast\colon M^Z \to M^W$ for right Kan extension along $f$, insofar as these extensions exist. The given transformation $\alpha$ induces a transformation $u^\ast g^\ast \to v^\ast f^\ast$, which has mates
$$v_! u^\ast \to v_! u^\ast g^\ast g_! \to v_! v^\ast f^\ast g_! \to f^\ast g_!$$
and
$$g^\ast f_\ast \to u_\ast u^\ast g^\ast f_\ast \to u_\ast v^\ast f^\ast f_\ast \to u_\ast v^\ast.$$
We say that the square $\alpha$ is exact if these two transformations $v_! u^\ast \to f^\ast g_!$ and $g^\ast f_\ast \to u_\ast v^\ast$ are isomorphisms for any $M$, insofar as the relevant pointwise Kan extensions exist. (In fact, by the general calculus of mates, if one is an isomorphism, so must the other be.)
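
To get a feel for the definition, here is one standard family of examples (a well-known fact, and not needed for anything below): take $X = Y = Z$ to be a single category $A$, let $u = v = \mathrm{id}_A$ and $g = f$, and let $\alpha$ be the identity 2-cell:
$$\array{ A & \overset{\mathrm{id}}{\to} & A \\ ^{\mathrm{id}}\downarrow & & \downarrow^f \\ A & \underset{f}{\to} & W }$$
Then the two mates reduce to the unit $\mathrm{id} \to f^\ast f_!$ and the counit $f^\ast f_\ast \to \mathrm{id}$, so exactness of this square says that Kan extensions along $f$ are genuine extensions: restricting them back along $f$ recovers the diagram we started with, up to canonical isomorphism. This happens precisely when $f$ is fully faithful.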

For example, suppose that $cspn$ denotes the walking cospan $\to \leftarrow$ and $\square$ denotes the walking commutative square, with $u\colon cspn \to \square$ the inclusion. Then our “thing number 1” says exactly that the square
$$\array{ cspn & \to & cspn \\ \downarrow && \downarrow^u \\ cspn & \underset{u}{\to} & \square }$$
(which is inhabited by the identity 2-cell) is exact: right Kan extending along $u$ and then restricting back along it is canonically isomorphic to the identity. And if we recall that limits in the ordinary sense can be identified with right Kan extensions along the unique functor to the terminal category $1$, then our “thing number 2” says exactly that the square
$$\array{ cspn & \to & 1 \\ \downarrow & \swArrow & \downarrow^p \\ cspn & \underset{u}{\to} & \square }$$
is exact, where $p\colon 1\to \square$ picks out the upper-left vertex – i.e. right Kan extending a cospan along $u$ and then restricting to the upper-left vertex is canonically isomorphic to the limit of that cospan. (Exactness of this square is precisely a special case of pointwiseness of the Kan extension, if you know what that means.)
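
(As a reminder, “pointwise” means that the right Kan extension is computed one object at a time by comma-category limits: for a functor $k\colon A \to B$ and a diagram $D\colon A \to M$, we have
$$(k_\ast D)(b) \;\cong\; \lim_{(b/k)} (D\circ \mathrm{pr}),$$
where $(b/k)$ is the category of objects $a\in A$ equipped with a map $b \to k(a)$ in $B$, and $\mathrm{pr}\colon (b/k) \to A$ is the projection. For the second square above, the comma category $(p/u)$ is isomorphic to $cspn$ itself, since $p$ is initial in $\square$; so this formula says precisely that the value of $u_\ast D$ at the upper-left vertex is the limit of the cospan $D$.)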

The reason it’s convenient to express things like this in terms of exactness of squares is that we can also give an explicit combinatorial description of when a given square is exact, allowing us to verify easily that many Kan extensions can be “computed” in terms of others. Here’s the characterization: given a square
$$\array{ X & \overset{u}{\to} & Y \\ ^v\downarrow & ^\alpha \swArrow & \downarrow^g \\ Z & \underset{f}{\to} & W }$$
as above, fix some objects $z\in Z$ and $y\in Y$ and form the category $(y/X/z)$ whose objects are triples $(x,\; y\to u(x),\; v(x)\to z)$ and whose morphisms are morphisms in $X$ that make the two evident triangles commute. There is a functor $(y/X/z) \to W(g(y),f(z))$ (considering the latter as a discrete category) which takes $(x,\; y\to u(x),\; v(x)\to z)$ to the composite $g(y) \to g(u(x)) \to f(v(x)) \to f(z)$. The theorem is that the square $\alpha$ is exact if and only if this functor induces a bijection of connected components. By discreteness of $W(g(y),f(z))$, this is equivalent to saying that for each morphism $\xi\colon g(y) \to f(z)$ in $W$, the subcategory of $(y/X/z)$ consisting of the triples mapping to $\xi$ is connected.

Two different proofs of this characterization of exact squares can be found on the nLab. I leave it to you to verify that the two squares described above satisfy this condition.
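
If you want a hint: the first square is an instance of the fully-faithful example above, since the inclusion $u\colon cspn \to \square$ is fully faithful. For the second square, note that for the unique object $y$ of $1$ and any $z\in cspn$, the category $(y/cspn/z)$ is simply the slice category $cspn/z$, which is connected (it has $(z,\mathrm{id}_z)$ as a terminal object), while $\square(p, u(z))$ is a singleton because $p$ is initial in $\square$; so the comparison functor is indeed a bijection on connected components.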

Let’s do one more sample computation, which is maybe a bit less trivial. Consider the standard lemma on pullback squares which says that given a diagram
$$\array{a & \to & b & \to & c \\ \downarrow & & \downarrow & & \downarrow \\ d & \to & e & \to & f }$$
in which the right-hand square $bcef$ is a pullback, the left-hand square $abde$ is a pullback if and only if the outer rectangle $acdf$ is a pullback. Let’s prove this using exact squares. I’ll write $cdef$ for the lower-right L-shaped subcategory of the above diagram shape, and so on, and I’ll leave to you the verification that all the squares that arise are in fact exact.

First of all, since the squares
$$\array{cef & \to & bcef\\ \downarrow && \downarrow\\ cdef & \to & abcdef}$$
and
$$\array{cdf & \to & acdf\\ \downarrow && \downarrow\\ cdef & \to & abcdef}$$
are exact, if we start from a $cdef$-diagram and right Kan extend it to a full $abcdef$-diagram, then the right-hand square and outer rectangle must be pullback squares. Moreover, by composition of adjoints, right Kan extension from $cdef$ to $abcdef$ is equivalent to first extending to $bcdef$ and then to $abcdef$, and since the square
$$\array{bde & \to & abde\\ \downarrow && \downarrow\\ bcdef & \to & abcdef}$$
is exact, the left-hand square in such an extension must also be a pullback.
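
(To unpack the first of these a little: writing $i\colon cdef \to abcdef$ and $j\colon cef \to bcef$ for the inclusions, its exactness says that for a $cdef$-diagram $D$,
$$(i_\ast D)\big|_{bcef} \;\cong\; j_\ast\big( D|_{cef} \big),$$
i.e. the $bcef$-part of the right Kan extension of $D$ is the right Kan extension along $cef \hookrightarrow bcef$ of the underlying cospan $D|_{cef}$ – which, by the discussion of the walking cospan and square above, is exactly a pullback square on that cospan. The other two squares are used in just the same way.)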

Now if we start with an $abcdef$-diagram, say $F$, we can restrict it to a $cdef$-diagram and then right Kan extend it to a new $abcdef$-diagram. If $u\colon cdef \to abcdef$ is the inclusion, then this results in $u_\ast u^\ast F$, and we have a canonical natural transformation $\eta \colon F \to u_\ast u^\ast F$ (the unit of the adjunction $u^\ast \dashv u_\ast$). Since the square
$$\array{cdef & \to & cdef \\ \downarrow && \downarrow\\ cdef & \to & abcdef}$$
is exact, the counit $u^\ast u_\ast G \to G$ is an isomorphism for any $G$, and in particular for $G=u^\ast F$, from which it follows by the triangle identities that $u^\ast F \to u^\ast u_\ast u^\ast F$ is also an isomorphism – i.e. the components of $\eta\colon F \to u_\ast u^\ast F$ at $c$, $d$, $e$, and $f$ are isomorphisms. Now if the right-hand square of $F$ is a pullback, then the restrictions of $F$ and $u_\ast u^\ast F$ to $bcef$ are both pullback squares; hence since the $cef$-components of $\eta$ are isomorphisms, so is the $b$-component. And if the left-hand square of $F$ is a pullback, then we can play the same game with $abde$ to show that the $a$-component of $\eta$ is an isomorphism, while if the outer rectangle is a pullback, we can play it with $acdf$. Hence in both of these cases, $\eta$ itself is an isomorphism, since all of its components are – and thus the remaining square in $F$ is also a pullback, since we have shown that it is so in $u_\ast u^\ast F$.
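
(In case the appeal to the triangle identities is too terse: the composite
$$u^\ast F \;\overset{u^\ast \eta_F}{\to}\; u^\ast u_\ast u^\ast F \;\overset{\varepsilon_{u^\ast F}}{\to}\; u^\ast F$$
is the identity, so if the counit $\varepsilon_{u^\ast F}$ is invertible then so is $u^\ast \eta_F$ – and $u^\ast \eta_F$ is precisely the collection of components of $\eta$ at $c$, $d$, $e$, and $f$.)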

This may seem like a slightly roundabout way to prove such a basic lemma. But its merit is that it generalizes directly to any other context in which we have a notion of “Kan extension” and “exact square” – in particular, to derivators. I won’t go into the definition of derivators today, since this post is already too long; you can find it on the nLab and in the references cited there. The most important change, when working with derivators, is that instead of exact squares we have to consider homotopy exact squares, which satisfy the analogous property for all homotopy Kan extensions (a.k.a. Kan extensions in $(\infty,1)$-categories). Being homotopy exact is a stronger property than being merely exact: the corresponding combinatorial characterization says that the functor $(y/X/z) \to W(g(y),f(z))$ induces a weak equivalence of nerves, rather than merely a bijection of connected components. (In fact, one might argue that a priori this is actually a more natural demand – why cut things off at level 0?) Fortunately, in practice it’s often also easy to verify this stronger condition, and all of the exact squares I drew above are actually homotopy exact; thus exactly the same proofs work in any derivator, and hence in any $(\infty,1)$-category.

Posted at June 21, 2010 9:07 PM UTC

TrackBack URL for this Entry:   https://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2231

12 Comments & 1 Trackback

Re: Exact Squares

I think I am following so far. What I still have only a vague feeling for – not having worked with this myself – is what this does for us in practice.

At derivator you write:

This gives a convenient way to compute many homotopy limits and colimits, which works in any derivator, and a fortiori in any (∞,1)-category or model category. Example applications can be found at homotopy exact square.

So I looked at homotopy exact square, which is very nice. But it still does not quite seem to provide me with convenient ways to compute homotopy limits. Or maybe I am missing something.

The first example listed there says that, as you amplified here, homotopy Kan extensions are pointwise. It’s nice to see how this relates to homotopy exactness of comma squares, but this is not a new technique for computing homotopy limits, or is it?

Similarly for the example of extension along fully faithful functors: it’s good to see the reformulation in terms of homotopy exact squares, but have we really gained a “convenient way to compute” the homotopy limit?

You probably have some examples where the computation of a concrete homotopy (co)limit is facilitated by appealing to exact-square reasoning. Could you spell one such example out?

Posted by: Urs Schreiber on June 23, 2010 9:18 AM | Permalink | Reply to this

Re: Exact Squares

I was thinking of the composite-of-pullback-squares I described at the end of the post as a “computation” of a sort: it tells you how to “compute” what happens if you take a pullback and then another pullback. There admittedly aren’t lots of examples yet at homotopy exact square; I’m planning to put this one there, and more as I get around to it.

You might also get a feel for how these “computations” go by looking at some of the arguments about pointed derivators, although those are also incomplete on the nLab. Pointed and stable derivators seem to be the ones which a lot of people studying derivators have put the most work into (although I haven’t gotten very far yet into Grothendieck’s tome).

Posted by: Mike Shulman on June 24, 2010 5:19 AM | Permalink | Reply to this

Re: Exact Squares

For example, here’s a situation in studying pointed derivators where you might care about composing pullback squares. Given a map $f\colon X\to Y$ in a pointed derivator, its fiber is by definition the pullback
$$\array{ fib(f) & \to & \ast \\ \downarrow & &\downarrow \\ X & \underset{f}{\to} & Y}$$
If we then take the fiber again of the map $fib(f) \to X$, we have a pair of pullback squares
$$\array{ Z & \to & fib(f) & \to & \ast \\ \downarrow & &\downarrow & &\downarrow \\ \ast & \to & X & \underset{f}{\to} & Y}$$
and knowing that the outer rectangle is again a pullback square:
$$\array{ Z & \to & \ast \\ \downarrow & &\downarrow \\ \ast & \to & Y}$$
tells us that $Z$ is, again by definition, the loop space of $Y$. In this way we get “fiber sequences” in any pointed derivator, and dually cofiber sequences. If the derivator is not just pointed but stable, then these are the distinguished triangles.
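
(Iterating this, one gets in any pointed derivator the familiar-looking sequence
$$\cdots \to \Omega X \to \Omega Y \to fib(f) \to X \overset{f}{\to} Y$$
by repeatedly taking fibers, and dually the cofiber sequence by repeatedly taking cofibers.)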

Posted by: Mike Shulman on June 25, 2010 8:18 PM | Permalink | Reply to this

Re: Exact Squares

For example, here’s a situation in studying pointed derivators where you might care about composing pullback squares.

I maybe don’t understand yet the point you are making now.

Certainly we do care that $(\infty,1)$-pullbacks and -pushouts satisfy the pasting law of ordinary pullbacks, and we know it is true for models of $(\infty,1)$-categories given by quasi-categories, by simplicially enriched categories and by model categories, at least.

So it’s good to see how to prove this in a derivator. But how is that now a technique for computing these?

What you just stated is just the general abstract reasoning behind fiber sequences. Or not? What is the extra value of derivators that I should take notice of here?

Posted by: Urs Schreiber on June 25, 2010 8:30 PM | Permalink | Reply to this

Re: Exact Squares

I think maybe you misunderstand what I mean by “computing.” That is what I mean by “computing” — a formal calculus that tells you what some new limit is in terms of things you already knew. What do you mean by “computing”?

(Whenever I talk about “computing” something in category theory, I feel a little guilty, because quite a while ago, when a friend of mine who was an algebraic geometer was trying to understand what being a category theorist is like, she asked me whether I ever actually “computed” anything. I gathered that the answer was no, we never really do “compute” anything in the sense she was thinking of; in particular we don’t ever really compute something and get a surprising result, the same way that you might compute the cohomology of some space and be surprised to find out that it’s zero in all odd degrees. But since category theory is a “meta” sort of mathematics in some ways, I think we’re justified in using words like “computing” and “applications” with a more abstract sort of meaning than most other mathematicians might.)

The value of derivators here, if there is any, is that I was able to prove that pullback squares compose in a couple of (hopefully comprehensible) paragraphs, using language and technology that are easy for me to understand. The nLab page fiber sequence remarks that “Since the (∞,1)-categorical homotopy pullback compose just as ordinary pullback diagrams do,…” with no reference to a proof; presumably this fact is in HTT somewhere? I certainly wouldn’t know how to go about proving it using quasi-categories or simplicially enriched categories, but after thinking about derivators for just a little while I was able to write down the proof in that language quite easily myself. Remember what I said back here, that I’d like to be as comfortable manipulating $(\infty,1)$-categories as I am manipulating 1-categories? Derivators give me a language that I can understand easily which suffices for many $(\infty,1)$-categorical manipulations.

Now, possibly in this case the direct $(\infty,1)$-categorical proof is not hard at all! If you can point me to it, I (and you, and anyone else listening in) can compare them in terms of ease of understanding and write-down-ability. That could be a really useful test case. Of course, just one example wouldn’t really be conclusive either way, but if it turns out more generally that $(\infty,1)$-categories really aren’t any harder than derivators, I’d happily forget about the latter.

Posted by: Mike Shulman on June 26, 2010 6:04 AM | Permalink | Reply to this

Re: Exact Squares

That helps, now I know what you mean. So you have in fact here a proof of the pasting property of $(\infty,1)$-pullbacks/pushouts in terms of derivators.

Okay, very good. We should add this to the nLab.

The statement of the pasting laws for pullbacks/pushouts in a quasicategory is mentioned and referenced at nLab: limit in a quasi-category – pasting law of pushouts.

(I have spent a minute now trying to better highlight and cross-link this, but I see that it is still rather suboptimal.)

Concerning computation: I see what you mean, but we can also more concretely compute $(\infty,1)$-categorical pullbacks, I’d say. At homotopy limit - Examples some basic explicit “computations” are spelled out.

Or right this moment I am busy typing out concrete computations of certain homotopy pullbacks at $\infty$-Lie groupoid. I do this mostly by finding fibrant replacements for pullback diagrams in the corresponding model category and then determining their ordinary 1-categorical pullback. If derivators gave me an alternative way to obtain the results of such computations, up to equivalence, I’d be very interested in this. That’s why I was trying to better see what you had in mind here.

Posted by: Urs Schreiber on June 26, 2010 8:02 AM | Permalink | Reply to this

Re: Exact Squares

Whenever I talk about “computing” something in category theory, I feel a little guilty, because quite a while ago, when a friend of mine who was an algebraic geometer was trying to understand what being a category theorist is like, she asked me whether I ever actually “computed” anything. I gathered that the answer was no, we never really do “compute” anything in the sense she was thinking of; in particular we don’t ever really compute something and get a surprising result, the same way that you might compute the cohomology of some space and be surprised to find out that it’s zero in all odd degrees.

That’s funny, because I constantly feel like I’m computing something when I do category theory! I see what you mean though.

In my head there’s not much semantic difference between ‘computing’ and ‘calculating’, and it could be helpful to remember the link between ‘calculate’ and ‘calculus’. I don’t necessarily mean the calculus of Newton and Leibniz, although that could be an example of what I mean; I also mean the ‘calculus of relations’, ‘sequent calculus’, etc.: a body of techniques which can be codified in terms of more or less formal rules which can be applied more or less mechanically, which don’t require any inspiration, so to speak, just perspiration. That seems to be in line with what you’re saying, but of course it happens all over mathematics.

In a page I’m writing at the Lab, I write “let’s now compute blah blah” because it sounds a little more dignified than “let’s now figure out what this thing looks like”. I then apply certain techniques to bring the thing down from a highly abstract description to something more explicit and concrete, and it seems to me that is analogous to the physicist’s “get the numbers out”. So maybe that’s what I usually mean by compute: calculate in a way which brings down the level of abstraction a few notches.

Posted by: Todd Trimble on June 26, 2010 1:30 PM | Permalink | Reply to this

Re: Exact Squares

a body of techniques which can be codified in terms of more or less formal rules which can be applied more or less mechanically, which don’t require any inspiration, so to speak, just perspiration.

Yes, I think that’s about what I usually mean by “compute.” I think it’s fair to describe the calculus of homotopy Kan extensions and homotopy exact squares in derivators that way, although it does require some skill in selecting the right exact squares to look at (which I’m still in the process of developing). But other “calculi” also involve some skill as well in addition to pure mechanical manipulation, e.g. in doing integration by substitution, or calculating with spectral sequences.

Probably the calculus of various kinds of fibrations and anodyne maps in quasicategories could also be described in that way, but so far the calculus of exact squares in derivators is proving easier for me to understand.

Posted by: Mike Shulman on June 27, 2010 1:20 AM | Permalink | Reply to this

Re: Exact Squares

The statement of the pasting laws for pullbacks/pushout in a quasicategory is mentioned and referenced at nLab: limit in a quasi-category – pasting law of pushouts.

Thanks. Maybe this should be discussed at homotopy pullback, or if that is too centered on model-category-like versions, at a page like (∞,1)-pullback.

The proof of HTT 4.4.2.1 is about as long as the proof I gave for a derivator, but it invokes some previous results that I haven’t tracked down yet. In notation similar to that which I was using, the argument seems to be that if we assume the right-hand square to be a pullback, then in the span
$$C\downarrow (F|_{cdf}) \leftarrow C\downarrow (F|_{bcdef}) \rightarrow C\downarrow (F|_{bde})$$
of comma $(\infty,1)$-categories both maps are trivial fibrations, and hence a terminal object of $C\downarrow (F|_{cdf})$ (i.e. a pullback of the outer rectangle) is the same as a terminal object of $C\downarrow (F|_{bde})$ (i.e. a pullback of the left-hand square). I hope I’m translating this correctly; I suspect there may be a typo in HTT because the theorem is stated for pushout squares, but the notation used in the proof looks to me like it is talking about pullbacks instead.

It’s not clear to me yet why those two maps are trivial fibrations. The right-hand map is said to be a trivial fibration because $bde \hookrightarrow bcdef$ is left anodyne; how easy is it to verify that a functor is left anodyne? Do I need to try to build it up by gluing on left horns?

The left-hand map is said to be a composite of trivial fibrations $C\downarrow (F|_{bcdef}) \to C\downarrow (F|_{cdef}) \to C\downarrow (F|_{cdf})$. No argument is given for why the second of these is a trivial fibration; should it be obvious to someone who’s been reading the whole book up to this point? The first is said to be a pullback of $C\downarrow (F|_{bcef}) \to C\downarrow (F|_{cef})$; does this come from some argument like $bcdef = cdef \sqcup_{cef} bcef$? And finally, I guess that $C\downarrow (F|_{bcef}) \to C\downarrow (F|_{cef})$ is a trivial fibration because the right-hand square is assumed to be a pullback, although I don’t know exactly where to find that characterization.

we can also more concretely compute (∞,1)-categorical pullbacks, I’d say.

I don’t think derivators are going to be any help in that sort of computation, since it’s really more about the process of passing from a model category to an $(\infty,1)$-category, and the process of passing to a derivator is doing more or less the same thing.

Posted by: Mike Shulman on June 27, 2010 1:15 AM | Permalink | Reply to this

Re: Exact Squares

Maybe this should be discussed at homotopy pullback, or if that is too centered on model-category-like versions, at a page like (∞,1)-pullback.

All right, I wanted to do something else today now, but I have now created that entry and will try to be working on it. But the World Cup and some friends will get in my way in an hour or so…

The proof of HTT 4.4.2.1 is about as long as the proof I gave for a derivator,

Don’t get my previous messages wrong: I have no quarrel with your claim that you have a nice proof of the $(\infty,1)$-pullback pasting law in terms of derivators. I quite appreciate it. It was just that when you said you had a “convenient way to compute homotopy limits” I was thinking you were speaking of doing something else.

By the way, I think it is fairly straightforward (if maybe a tad tedious) to check the pasting law for homotopy pullbacks in homotopical categories that are at least categories of fibrant objects (i.e. have a notion of fibrations). With the general statement that homotopy limits compute quasi-categorical limits, this also gives the desired statement here.

All right, I hope to have some minutes now to work on that entry and address some of your technical questions…

Posted by: Urs Schreiber on June 27, 2010 1:43 PM | Permalink | Reply to this

Re: Exact Squares

All right, I wanted to do something else today now, but I have now created that entry and will try to be working on it. But the World Cup and some friends will get in my way in an hour or so…

…first match over…

Here is now what I think is the detailed argument that the right-hand map in the proof of the quasi-categorical pullback pasting law is a trivial (Kan) fibration, using the properties of left anodyne morphisms now further collected here, which all go back to André Joyal’s semi-unpublished work.

That the left-hand map is a trivial Kan fibration is “pretty clear”, I’d say, being essentially just a re-statement of the universal property of the pullback. But I’ll try to type out the detailed argument now… unless Argentina and Mexico get in my way this time…

Posted by: Urs Schreiber on June 27, 2010 7:16 PM | Permalink | Reply to this

Re: Exact Squares

…first match over…

Yes, it was about as easy as that with England’s ragged defence. Mind you, the referee didn’t help (or the lego version).

Posted by: David Corfield on June 28, 2010 11:09 AM | Permalink | Reply to this
Read the post Productive Homotopy Pullbacks
Weblog: The n-Category Café
Excerpt: Certain squares built out of cartesian products are always homotopy pullbacks, in any derivator.
Tracked: November 19, 2011 7:28 AM
