Book Review: Beyond Apollo, Barry N. Malzberg (1972)

https://sciencefictionruminations.com/2012/07/29/book-review-beyond-apollo-barry-n-malzberg-1972/

5/5 (Masterpiece — but please consider the caveats below before procuring a copy)

(This review is a product of lengthy dialogues with my girlfriend, a graduate student in English, who devoured the work with great relish and enthusiasm. Her remarkable eye peeled away levels I didn’t even know existed and heightened my appreciation for this underread classic. I owe large portions of this review to her.)

Barry N. Malzberg’s Beyond Apollo (1972), the fourth of his novels I’ve read (after Conversations, In the Enclosure, and Guernica Night), is generally considered his best work (it won the inaugural John W. Campbell Memorial Award for Best Novel). In a genre infrequently blessed with literary experimentation — there are of course a few exceptions: Ballard’s The Atrocity Exhibition (1970), Joanna Russ’ The Female Man (1975) and And Chaos Died (1970), and John Brunner’s Stand on Zanzibar (1968), among others — I’m predisposed toward works that are structurally and stylistically inventive and thought-provoking. Beyond Apollo more than fulfills both longings.

Beyond Apollo is a masterpiece; a multi-faceted rumination on repression; a virulent critique of the space program and America’s obsession with space; a metafictional labyrinth that can, at times, be infuriatingly undefined.

Brief Plot Summary and Discussion of the Character of Harry Evans

Beyond Apollo is not a plot-driven work. A vague outline can be gathered from the ultimately cyclical 67 short chapters recounting memories, events, and tellings/re-tellings/re-tellings of re-tellings of said memories and events. Harry Evans, the only survivor of a two-man expedition to Venus, recounts his story in an endless series of re-tellings. The crux of the novel is the death of the expedition’s Captain, who, Harry claims, was insane (11).* Was it an accident? Was it due to some strange effect emanating from Venus? Did Harry kill him? Was it self-defense? If he did kill him, how (the trash dispenser, a brute attack)? Why (the Captain’s sexual advances, a game gone wrong, etc.)? Evans is interned in a psychiatric institution and interrogated by the authorities about the voyage and the fate of the Captain.

Critics often focus on the lack of resolution concerning what actually happened on the voyage. The reader suspects that Harry is the culprit, but the “truth” is never obvious. There are also many hints (discussed below) that the Captain isn’t a real person at all. Malzberg purposefully constructs the novel so that we’ll never know exactly what happened on the trip (even if one of the versions Harry describes is indeed what happened). Not only is our anti-hero the only one to return from Venus, but it is also clear that he is either purposefully obfuscating the truth or does not know it himself. Perhaps he’s unconsciously obscuring his own memories or, paradoxically, has gone insane trying to prove a point about the dangerous environment of the space program. Evans has certainly been scarred, whether by his experience onboard the vessel or by his disastrous previous mission to Mars. After each of his re-tellings he claims that what he had previously said about Venus was a lie and that he’ll now tell the truth:

“‘Oh,’ I say, ‘I forgot to tell you. I mean I lied to you the first time around and now I can’t bear to lie any more because I see how crucial the information is. The Captain never said anything about having nuclear devices.’” (40)

And:

“‘I’m convinced,’ I say. ‘I don’t want to live in a tube; I want to see the sun again, to receive a commendation from the President, and someday even to remarry. Because you will agree, I really can’t live with this particular woman any more; we never got along. So let me confess: let me tell you. Venus, it turns out, is populated by an intelligent race of malevolent green snakes.’” (38-39)

Two “games” are played to find out the truth. On one level, Evans is interrogated by Forrest (the psychiatrist) about the “truth” of the trip. In other chapters, Evans and the Captain play a game during the voyage about the reason for the Venus mission in the first place:

“the Truth must be absolute; there must be no hedging or lying; and the game will continue until each of us either answers three questions satisfactorily or refuses to respond, in which case the person who has asked the question will be the winner. If there is any suspicion of lying, the one under suspicion will have exactly thirty seconds to prove his statement or lose” (28).

There are also dream conversations between Evans and the Captain (who may or may not be a manifestation of himself); dream conversations with dead relatives; dream conversations with Forrest; an unusual history of the space program up to the Venus mission; cryptograms; and lengthy interludes describing in detail Evans’ own sex life and Evans’ imagining of the Captain’s sex life.

Malzberg’s Metafictional Techniques

Malzberg’s fiction is heavily inspired by the rise of postmodernism in the mid-to-late 1960s (Borges, Beckett, Burroughs, Calvino, etc.). Many of the 67 fragmented chapters are about the novel that Harry Evans plans to write about his expedition to Venus. The implication is that this 67-chapter novel is itself the version Harry Evans wrote about the expedition to Venus.

“In the novel I plan to write of the voyage, the Captain will be a tall, grim man with piercing eyes who has no fear of space. “Onward!” I will hear him shout. “Fuck the bastards. Fuck control base; they’re only a bunch of pimps for the politicians anyway. We’ll make the green planet yet or plunge into the sun. Venus forever! To Venus! Shut off all the receivers now. Take no messages. Listen to nothing they have to say; they only want to lie about us to keep the administrators content. Venus or death!” (11)

The entire work is thus what Harry Evans presents about the expedition. In a novel obsessed with characters seeking the truth, Malzberg layers the narrative with truth-obscuring meta-narrative techniques. At multiple points Malzberg has Evans explain the approach to assessing truth that the reader should bring to the novel. Evans points out that the novel he will write “will be able to apprehend the truth because throughout the whole sweep and scope of the book there will not be a single moment, a passage so precise and detailed that I will have to come to grips with myself and my true relation to the Captain” (110). Yet earlier, Evans explained that “what happened can be indicated only in small flashes of light, tiny apertures which, like periscopes, will illuminate some speck of an overall situation so large that none of us can comprehend it” (47). Thus, the scenes themselves are not specific or concrete enough to indicate exactly what happened between Evans and the Captain, but they do indicate the larger truth of the experience. But the very fact that we are reading a novel written by the character in the novel about his own experience calls even this broad statement into question.

There are distinct moments in the narrative that feel “true,” as with every novel with autobiographical inspiration. For example, a few lines in conversations with Forrest which aren’t in Evans’ dreams suggest a kernel of truth. Three such scenes imply that the Captain and Evans are the same person. First, Evans knows about nuclear devices on the spaceship that only the Captain knew about (40). Of course, after Evans claims that the vessel had nuclear weapons that only the Captain knew about, he claims that he “made up the part about his saying we have explosives. I made up all of it” (40). Later in the novel Forrest says, “your latest and your last chance, Colonel, to tell us what happens.” When Evans hears the word colonel used for the first time in their relationship, he twitches slightly before regaining control of himself (61). A colonel in the army and air force is the same rank as a captain in the navy. The fact that the rank has such an effect on Evans suggests that he might indeed be the Captain. Likewise, one of Evans’ many anagrams hints at the same identification (44). As with many aspects of the novel, it is not entirely obvious that this is the “truth.”

Beyond Apollo is also a gendered critique of the hypermasculinity of the space program. Even though the great majority of sci-fi writers glorify the program and proclaim the virtues of our obsession with exploration, there are many stories that address its flaws, the damage it does to families, and how it manipulates young idealists: C. M. Kornbluth’s short story ‘The Altar at Midnight’ (1952) and James Gunn’s Station on Space (1958) are great examples. Malzberg goes beyond these earlier works and suggests that the space program makes hypermasculine machines out of men. This plays out in the extensive scenes between Evans and his wife: “We have been geared for efficiency. I begin to fuck her like a proper astronaut […]” (27).

Final Thoughts

For many sci-fi readers, this lack of concrete narrative, and thus of the footing needed to glean “what is true/what actually happened”, will be off-putting. Added to this devious brew of an unreliable narrator recounting the events of the voyage for his interviewer is a heavy dose of metafictional technique and explicit (purposefully shocking) sexuality. Unless you are comfortable with this trifecta (and other postmodernist elements), I suggest avoiding his substantial corpus. For the braver readers out there who enjoy sci-fi with a literary, experimental, nihilistic turn, Malzberg’s oeuvre is a veritable treasure trove.

A challenging and literary masterpiece…

Man of Steel vfx milestones

https://www.fxguide.com/fxfeatured/man-of-steel-vfx-milestones/

“Zack Snyder wanted Man of Steel to appear very natural because there’s some very fantastical things in there and he wanted people to suspend their disbelief, and we the visual effects team had to make it as easy as possible for them to do so.” So recounts overall visual effects supervisor John ‘DJ’ Desjardin on the philosophy behind Man of Steel’s visual style.

Desjardin notes that the intent was to shoot a more handheld (the DOP was Amir Mokri) and documentary-style film than previous outings in this comic book character’s ’verse. “We had to think about what that would mean since we also had to photograph some crazy action,” says Desjardin. “So for a lot of the previs we did, we’d start to think where our cameras were and where our cameraman was. A lot of the rules are the Battlestar Galactica rules for the space cams that Gary Hutzel developed for that mini-series, where we want to make sure if we’re translating the camera at all it makes sense. Unless the action is so over the top, like in the end where Superman is beating up Zod – we had to break it a bit.”

fxguide talks to the major players responsible for bringing to life the visual effects of Man of Steel: overall supervisor John ‘DJ’ Desjardin, and Weta Digital, MPC and Double Negative. With so much work in the film, we delve into just three of the many tech accomplishments:

  1. The tech of Krypton
  2. Live action and CG takeovers: the Smallville confrontation
  3. Destroying a city: the invasion of Metropolis

And we also take a look at PLF’s previs work, Scanline’s tornado and oil rig effects and Look Effects’ work on the bus crash.

The tech of Krypton

Act I of the film takes place on Krypton, a planet facing destruction from an unstable core. Weta Digital created alien planet environments, creatures and also the key means of display – a technology the filmmakers came to call ‘liquid geo’, meaning liquid geometry. “Basically,” explains Weta Digital visual effects supervisor Dan Lemmon, “it’s a bunch of silver beads that are suspended through a magnetic field, and the machine is able to control that magnetic field so that the collection of beads behave almost like three-dimensional pixels, and they can create a surface that floats in the air and describes whatever the thing is you’re supposed to be seeing.”

The liquid geo devices appear in the planet Krypton scenes, as well as later sequences on the Kryptonian ship the Black Zero. Similar technology making up a panel display resembling a Greco-Roman bas-relief – but achieved via a different method – is present in a scene in which a hologram of Superman’s father, Jor-El, explains the history of Krypton to his son.

In creating the liquid geo, which took the form of anything from wide planet views and x-rays to displays on floating robots, and was even used to depict Jor-El communicating with his wife Lara, Weta Digital took these steps:

  1. The look – The beads, which up close would appear to be pyramids with a slight bevel, were designed to create a surface of the object they were depicting inside some kind of console. “Essentially we would have the normals of the objects that we were targeting provide a simulation with an orientation that one of the most dominant sides of the pyramid would align with,” explains Weta Digital lead FX TD Brian Goodwin.
  2. Modeling and animation – The models used for animation ranged from purpose-built (Lara’s face) to ones appearing in grander scenes (such as approaching scout ships). Says Goodwin: “We had to develop a pipeline to bring in assets, so instead of going through the route of reducing the polygon count to something usable what we would then do – you would take the model in whatever way it was made and just scatter discrete points onto it, and extract the matrix onto the animation and copy these points onto the matrix and have these sparse points behaving in a way that the model would.”

“We had animation provide us with geometry that we would then track beads on,” adds Goodwin. “Those beads would then be turned active in front of the actual console and the console would decide what beads it needed to provide to actually draw particles from the actual earth we described or this invisible bowl.”
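To make the scattering step concrete, here is a rough Python sketch (illustrative only, not Weta’s pipeline code; all names are ours) of an area-weighted point scatter over a triangle mesh, which also carries the surface normal each bead target would later align its dominant pyramid face against:

    import numpy as np

    def scatter_points(vertices, triangles, n_points, seed=0):
        """Scatter n_points uniformly over a triangle mesh.

        Returns positions and face normals: where a bead target sits and
        the normal it would orient against.
        """
        rng = np.random.default_rng(seed)
        tri = vertices[triangles]                          # (F, 3, 3) corners
        cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
        areas = 0.5 * np.linalg.norm(cross, axis=1)
        # Area-weighted triangle choice keeps density uniform on the surface.
        face = rng.choice(len(triangles), size=n_points, p=areas / areas.sum())
        # Square-root trick gives uniform barycentric coordinates.
        r1, r2 = rng.random(n_points), rng.random(n_points)
        a = 1.0 - np.sqrt(r1)
        b = np.sqrt(r1) * (1.0 - r2)
        c = 1.0 - a - b
        t = tri[face]
        points = a[:, None] * t[:, 0] + b[:, None] * t[:, 1] + c[:, None] * t[:, 2]
        normals = cross[face] / np.linalg.norm(cross[face], axis=1, keepdims=True)
        return points, normals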

  3. Simulation – After animation, artists ‘copied’ little beads onto the animated geometry for a pre-sim’d lighting version to get approval on how the object would read. Sims were then run “on all the targets which would be discrete beads floating around on top of the surface which would have its own set of parameters,” says Goodwin. “The bead size or the turbulence that would crawl along the surface constantly updating the orientation was based on the normal provided by the surface. That was then saved to disk and we would use that sim as the final target for the simulation.”

Something different

Weta Digital’s senior visual effects supervisor says the liquid geo look “stemmed from an idea that Zack Snyder had. He wanted to do something that was interesting and different in the way that you saw information presented. He didn’t want to do the typical screen. So one of the ideas Zack had was to make it a little bit more tactile – we looked at different things – the idea of that pinboard that you put your hand in and you see the shapes sort of form was what we had in our minds, but the more we looked at that the more we realized you can’t do just something like that; it’s too simple and limiting – the shapes really need to transform – you need something that has that look but is more liquid.”

The sims were based on a fluid sim. “We used Houdini’s internal FLIP solver that gave us the pressure, the sense of volume maintenance,” notes Goodwin. “We’d have the console sim inside an invisible membrane, and there would be currents that we would describe with what we would find aesthetically pleasing within the shot.”

“The console was like a cup turned to the side,” adds Goodwin, “so whereas gravity would be Y pulling ‘down’, in this case the Y is facing into the back of the ‘cup’ console,” which means essentially gravity is pulling from the back of the cup towards the actors watching the display. The beads then fall (after they pass some threshold) towards the inside of the geometry. It is as if one poured the beads into a glass bowl, all filmed from below the bowl looking up – except everything is turned on its side, so the beads fall towards the viewer, and the geometry of, say, a planet Earth is a hollow glass bowl between the sea of original beads and the viewer. “We sort of reversed it by having gravity faced towards the front of the membrane which would be the meniscus and the surface of the actual water was the front of the console, so we would have a constant force pushing towards the front which would give us a sense of a flat surface with water traveling around on it, but this surface was never rendered.”
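As a toy illustration of that sideways ‘cup’ (our own simplification with an assumed axis convention, not the actual Houdini FLIP setup): gravity is rotated to point out of the console toward the viewer, and an invisible membrane stops the beads so they pool into the flat front surface Goodwin describes.

    import numpy as np

    def step_beads(pos, vel, dt=1.0 / 24.0, g=9.8):
        """One step of a toy 'sideways cup' bead sim.

        Assumed axis convention (ours): +Z points out of the console toward
        the viewer. Gravity is rotated onto +Z, and an invisible membrane
        at z = 0 (the 'meniscus') stops the beads so they pool into a flat
        front surface facing the audience.
        """
        vel = vel + np.array([0.0, 0.0, g]) * dt    # gravity toward the viewer
        pos = pos + vel * dt
        hit = pos[:, 2] > 0.0                       # crossed the membrane
        pos[hit, 2] = 0.0
        vel[hit, 2] = 0.0                           # settle on the front plane
        return pos, vel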

  4. Noise – After simulation, Weta Digital ran every bead through a temporal filter to remove jitter. “Even with the highest RenderMan settings we would still face a lot of noise, and that led us to taking out all the noise within the simulation,” continues Goodwin. “Even the most subtle twist of the bead, half a degree, 2 degrees, would, because of being mostly specular, result in seeing a completely different point within the IBL (Image Based Lighting), and that would create a tremendous amount of variation from the slightest bit of movement. By filtering it, it softened the whole piece out, to the point that sometimes we needed to get a little bit of grittiness back in because smoothing out the beads too far would look too boring.”
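A minimal sketch of such a temporal filter, assuming per-bead orientation vectors stored per frame (our toy version, not Weta’s actual filter): average each bead’s orientation over a small frame window, then renormalize.

    import numpy as np

    def filter_orientations(frames, radius=2):
        """Temporally smooth per-bead orientations.

        frames: (T, N, 3) unit orientation vectors for N beads over T
        frames. Box-filter each bead over a +/- radius window, then
        renormalize. Even a sub-degree twist on a mostly specular bead
        picks a different point in the IBL, so filtering the sim is
        cheaper than brute-force render sampling; over-smoothing can be
        dialed back afterwards for grit.
        """
        T = frames.shape[0]
        out = np.empty_like(frames)
        for t in range(T):
            lo, hi = max(0, t - radius), min(T, t + radius + 1)
            avg = frames[lo:hi].mean(axis=0)
            out[t] = avg / np.linalg.norm(avg, axis=-1, keepdims=True)
        return out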

The team could control the flow from back to front and back again. “We allow the simulation to go through a series of noise fields to make it a little bit more interesting and then join the target,” says Goodwin. “Then once it’s joined the target it would essentially no longer be registered in the simulation. Once the target disappears we release it and it finds its way back into the simulation, by using the opposite force – we essentially use level sets to create some sort of pressure, and it would know when it was inside or outside this world, and a bunch of rules would dictate whether it was allowed to be outside.”
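The inside/outside bookkeeping Goodwin mentions can be pictured with a signed distance function. Below is a hypothetical sketch (ours, not Weta’s): beads within the target’s level set are ‘joined’ and frozen; the rest stay in, or are released back into, the free-flowing pool.

    import numpy as np

    def classify_beads(pos, sdf, band=0.02):
        """Split beads into joined vs free using a signed distance function.

        sdf(pos) < 0 means inside the target's level set. Beads within the
        band are 'joined' (locked to the target, no longer advected by the
        fluid sim); the rest remain governed by the simulation. The band
        threshold is an invented illustrative value.
        """
        d = sdf(pos)
        joined = d < band
        return joined, ~joined

    # Hypothetical target: a unit sphere centred at the origin.
    sphere_sdf = lambda p: np.linalg.norm(p, axis=-1) - 1.0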

  5. Lighting and rendering – Lighting solutions were taken from the set. For the consoles, Weta moved to the next level of RenderMan to take advantage of improved raytracing and instancing of objects. Motion blur was also a particular challenge. “We had the traditional motion blur – in that our particles do technically move,” says Goodwin. “We did a test where we rendered the objects and we would compare the motion blur literally straight from animation and we would line that up with the render we would get out of the beads. In some cases we would have the vectors that were provided to the renderer shortened ever so slightly, because as the beads form a target across two frames, the full vectors would result in spikes within the motion. You’d have a bead that travels across the length of the frame and you’d have a long streaky specular highlight – ultimately it’s shading at one end and smudging it, and in that case we’d shorten the motion blur so it wouldn’t create bright little spikes.”
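The vector-shortening trick amounts to scaling down per-bead motion vectors once their per-frame travel passes a threshold, so a fast bead shades as a tighter, dimmer streak instead of a bright specular spike. A hypothetical sketch (thresholds invented, not Weta’s values):

    import numpy as np

    def shorten_motion_vectors(vel, dt=1.0 / 24.0, max_disp=0.1, scale=0.85):
        """Scale down per-bead motion-blur vectors before rendering.

        vel: (N, 3) velocities. Beads whose per-frame displacement exceeds
        max_disp (scene units, an invented threshold) get their vectors
        shortened slightly, taming the long streaky specular highlights
        described above.
        """
        disp = np.linalg.norm(vel, axis=-1) * dt
        out = vel.copy()
        out[disp > max_disp] *= scale
        return out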

The history lesson bead shots were created slightly differently and without an underlying sim. “In addition to working out all the technical aspects, just figuring out aesthetically and creatively what actual images we would use to tell the story of Krypton was really important,” says Lemmon. “We did quite a lot of concept art based on various sculptures. We looked at bas-relief from the Rockefeller Center, we looked at Greco-Roman references and explored those kind of aesthetic looks but applied to spaceships and alien planets and alien technology – if you were depicting a sci-fi world through the medium of stone sculpture, what that might look like.”

Weta Digital had originally planned to do these shots with the liquid geo simulation engine as well, but ultimately the look required was a different one. “It’s more of a relief style,” says Goodwin. “It actually looks like each object exists within a world that’s flattened. The idea was that it went from being a simulation forming things to being a relief. If you look closely the background is flowing with the text and graphics – the beads travel along it – but outside of that, things weren’t actually moving to and fro.”

The action for the history lesson was animated based on greenscreen performances by Russell Crowe and Henry Cavill in what became, according to Goodwin, ‘humongous spaces’. “It was traveling hundreds of thousands of meters in a Maya scene, then we sent that through a projection,” says Goodwin. “It was all animated in world space and we would then send that through a transformation which we would then project onto a back wall and relief it. We needed to represent all the information in a confined space.”

The liquid geo shots on Krypton occur both while the planet is under siege from Zod’s crew and as it becomes unstable and, ultimately, implodes. Before that happens, wide views of Krypton depict an alien atmosphere; these are mostly Weta Digital environments, spacecraft and creatures.

One shot of Jor-El riding a winged creature made use of a buck and gimbal setup to replicate the move fashioned in previs. “We shot elements of Russell Crowe in his flying costume on that buck,” explains Lemmon, “and using previs as a guide tried to match both camera and the movement of the gimbal to the previs. Then we put those elements onto a digital creature and a digital world. But of course some of the stuff moved in such a way that it wasn’t possible to get those movements. There were shots where we transitioned in and out – we go all digital – then Russell Crowe for just five frames or so, and then back to a digital character.”

For some dramatic shots of Zod’s ships approaching the House of El, Weta Digital referenced scenes from Apocalypse Now. “Zod flies in these attack ships and his descent on the House of El was modeled on the Ride of the Valkyries sequence,” says Lemmon. “There’s actually a shot that everybody thinks is in Apocalypse Now, but isn’t – it’s in the Apocalypse Now poster: the sun shot, the ships flying out of the sun.”

Suits of armour worn by characters on Krypton were mostly CG additions to the actors wearing gray suits with tracking markers (although female characters wore practical armor on set). The tracking of these shots is therefore particularly complex, since it has to match all the movements of actors sometimes engaged in hand-to-hand combat, such as Zod’s attack after Superman’s pod is launched from Krypton.

A phaser battle contained a specific look for blasts with plasma residue. “Those were mostly Houdini simulations,” notes Lemmon. “We wanted to avoid a straight laser beam and do something that had a little bit more interest in it. The idea was the beam moves through the air and charges and ionizes particles in the air. On Krypton there’s particles that float in the air the same way that dust does here, but we treated them as if they were heavier and got more excited by the beam. As the beam moves through the air it glows and starts to swim a little bit and leave that residue, particularly when it hits somebody.”

Aerial battle shots employed Krypton’s hazy environment and shafts of light through rock pillars to add depth. “In busy sequences like that it’s important to compose things so that you can actually see what’s going on and see who’s good and bad and who’s winning,” states Lemmon. “One thing that drives me nuts in big action sequences is when you can’t actually – it’s just noise – and you can’t see what’s going on.”

Later, as the planet begins to destroy itself, the studio worked to show various angles of the destruction including a ‘from space’ view. Lemmon says he enjoyed “figuring out how it would look – playing it out as a geo-thermal event that’s influenced by the planet’s magnetic field, and maybe have it collapse along the equator rather than blow out spherically and implode first then explode afterwards. Playing around with those ideas was a lot of fun.”

Live action and CG takeovers: the Smallville confrontation

A major challenge faced by the filmmakers and visual effects crew on Man of Steel was to realize elaborate close-combat fight scenes between Superman and his Kryptonian foes. They wanted to take advantage of digital effects to portray superhuman strength and powers, but without what had been perceived previously as ‘cutting’ from live action to an obvious digi-double and environment. Instead, the filmmakers wanted these shots to be executed as seamless takeovers.

Desjardin explains: “When we do these fights and these hyper-real things, we don’t want to do the traditional, ‘OK I’m a cameraman, I’m shooting a clean plate, I’m going to pan over here to follow the action that’s not really there yet but we’ll put the action in later.’ Because that’s us animating the characters to the camera. So we would do that animation with the characters – grappling, punching or flying away – and we would take the real guys up until the point where they were supposed to do that and we’d cut. Then we’d put an environment camera there and take the environment. And then a camera for reference of the actors and get each moment. So then we had a set of hi-res stills for the environment and the characters. Then in post we take the digi-doubles and animate them according to the speeds we want them to move in our digital environment.”

This approach was pioneered for the Smallville encounter in which Superman confronts Zod and his crew after they have threatened Martha Kent. They fight on the streets of the town and are further attacked by the military via A-10s and ground assault troops. MPC handled visual effects for this sequence (in addition to many other shots in the film, ranging from Arctic scenes to shots in the upper atmosphere when Lois and Superman are taken to the Black Zero).

In order for the seamless takeovers to occur – and for the shot to continue with a pan, tilt or other move – a new capture and post-production process was proposed by MPC visual effects supervisor Guillaume Rocheron, in conjunction with Desjardin. Here’s how it broke down for a typical Smallville shot:

  1. The shot would be previs’d and the fight choreography established by stunt coordinator Damon Caro.
  2. Knowing from the previs the shot that was required, live action portions of the scene would be filmed in little pieces. “If say Superman was being punched and would land 50 meters away, we would shoot our start position and end position, and then bridge that gap with the CG takeovers,” says Rocheron.
  3. A camera rig dubbed the ‘Shandy-cam’ (named after on-set VFX coordinator Shandy Lashley) obtained keyframes of the actor. “It’s a six still camera rig that’s built on a pipe rig so that you can run it in at the end of a setup and get stills of keyframes of a performance or an expression,” says Desjardin, “and then we could use those hi-res stills to project onto the CG double and get really accurate transition lighting and color – right from the set.”
  4. On set, another camera rig was also used to capture the environment. “We ended up calling it Enviro-cam,” notes Rocheron. “It was a rig where we mount a Canon 5D and a motorized nodal head, and that allows us to capture full 360 environments at 55K resolution for every single shot. The capture time is very quick – we were taking between 2 and 4 minutes for every shot, so it was really easy – the same way we capture HDRIs.”

“The sets are there so why not capture them?” says Rocheron. “It basically allows you to film what is not filmable. Here there’s no cuts, no interruption. We also did a lot of entirely digital shots which had no live action. So we had our Enviro-cam, and we used that to capture the environment rather than a plate, and we could put our CG characters in there.”

The set capture resulted in lighting and textures that could be re-projected onto geometry (the sets were also LIDAR’d to aid in reconstruction). “We wrote a little pipeline in Nuke that allowed us to stitch all the photos together and then very simply calibrate them with the Smallville geometry,” says Rocheron. “We would calibrate just one angle, because for the full dome, all the photos would get automatically calibrated on the geometry. For us it was a very good process – it wasn’t just a sphere. Everything was re-projected in two and a half D on the geometry to get parallax and the camera would travel technically in all directions.”
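The heart of that 2.5D re-projection can be sketched as a lat-long lookup: each point on the LIDAR’d set geometry samples the stitched panorama along its direction from the Enviro-cam’s nodal position. A rough Python sketch (ours, not MPC’s Nuke pipeline; names are assumptions):

    import numpy as np

    def project_to_panorama(points, cam_pos, env_w, env_h):
        """Map world-space geometry points to lat-long panorama pixels.

        points: (N, 3) positions on the set geometry; cam_pos: the capture
        position. Each point samples the pano along its direction from the
        camera, which is what re-projecting stills onto geometry to get
        parallax amounts to.
        """
        d = points - cam_pos
        d = d / np.linalg.norm(d, axis=-1, keepdims=True)
        lon = np.arctan2(d[:, 0], d[:, 2])            # around the dome
        lat = np.arcsin(np.clip(d[:, 1], -1.0, 1.0))  # up the dome
        u = (lon / (2.0 * np.pi) + 0.5) * (env_w - 1)
        v = (0.5 - lat / np.pi) * (env_h - 1)
        return np.stack([u, v], axis=-1)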

Superman is also seen in several sequences, of course, flying through clouds. “We used volumetric clouds,” says Rocheron, “using an internal tool we have for clouds to mobilize geometry and transform it into volumes and refine with layers of advection and noises for the fine details.” In terms of environments Superman flies through, such as over the Arctic circle, over canyons in Utah, over Africa, and over the Dover cliffs, MPC developed these first in Terragen and then took them through to matte paintings and geometry.

  5. Full-screen digi-doubles were of course a major component. MPC led the creation of the digi-double Superman, Zod, Faora and other Kryptonians, which were shared with other vendors. Digital armour was also added, along with the energy-based Kryptonian helmets. Cyberscan and FACS sessions were conducted with the actors, and polarized and non-polarized reference photos were taken. Superman’s cape and costume were scanned in high detail – the cape in particular became a direct extension of Superman’s actions. “Our main reference for the cape were illustrations from Alex Ross,” states Rocheron. “We had the cape here at MPC so we could really study its thickness and the velvetness. The light is very soft on it. We did a cloth solve in nCloth and we wrote a number of tools in animation to be able to animate the cape and see it in real time. Once the animation was approved, we had a basic representation of the cape and we would then use that to drive the nCloth simulation.”

Superman’s cape

“Zack was very amenable to shooting a lot of stuff with Henry Cavill without using a real cape at all,” says Desjardin, noting that as the cape was envisaged almost as its own character, it would need significant visual effects art direction.

But the cape’s VFX also had to remain within the illusion of the filmmaking. “There were a couple of shots where someone might say, ‘I don’t quite like the way the cape moves there because it looks like it’s a real cape with a wire attached to the end of it to pull it,’ and Zack would say, ‘That’s fine! Don’t change that animation – I want people to think that maybe we did that, even though it’s a CG cape.’”

MPC used the latest version of RenderMan and its raytracing capabilities to help with the chainmaille look of Superman’s suit and the Kryptonian suits and armour. “They were all painted as displacements but we did hi-res displacement,” says Rocheron. “Raytracing allows you to capture that very subtle detail between the reflective pattern of the chainmaille and the light absorption of the blue part of the suit, since the underlying layer is bleeding through. We have infinite area lights which are the dome and the finite direct area lights that are the direct light sources you want to position in space. And that physically based setup gave us a terrific look for the reflections and the fall-off of the light – really key to get the details of the suit and armour, which in reality was mostly black.”

In one shot, Superman fist-fights with an eight-foot Kryptonian. “We shot a live action piece and just replaced the performance capture stunt guy and added the cape onto Superman,” says Rocheron. “Then we just thought about it and said it would be much cooler – since the Kryptonian is very tall – for Superman to fight against him while he’s hovering. We did those shots as entirely digital shots. It has that very cool feeling of flying around and punching him from all different directions.”

  6. For each shot, it then became a matter of choosing the right transition point. “There’s a little transition zone that’s maybe only one or two frames,” says Desjardin. “We knew that we wanted to keep Superman real in certain places because it was say super-sharp and we want to use that to anchor the shot, even if just for a couple of frames, and then we’re going to go into digital because it’s crazy right after this.”

“We layered a couple of other things on top of that,” adds Desjardin. “One, if there’s a punch being thrown, you can lose the arm real fast if it’s too fast. So a lot of times the arm of a CG character may be going just slightly faster than a human’s – we put a sonic boom-type signature around the forearm and we might put a little heat luminance on the leading edge surfaces of the fist. It puts an idea in your brain that it’s moving really, really fast even if it isn’t.”

  7. Not only were fights being depicted with digi-doubles and environments, they also traversed cornfields, buildings, glass walls and roads, and went up against flying A-10s. That necessitated incredible destruction, and for this MPC looked to its finite element analysis tool Kali, which had first been developed for the wooden pagoda destruction in Snyder’s Sucker Punch. This time around, following a few years of development, Kali was able to handle many more kinds of materials. “So we could take a tarmac and break it differently,” explains Rocheron. “It’s more resistant so it has a crater but cracks at certain places near the surface. And Superman crashes into a bank vault and crashes through a glass door with a metal frame and finally into the vault, which is made of super strong steel, so we made that bend and wrap around him.” Particle sims and Flowline were also incorporated into the destruction pipeline for Smallville.
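That material-dependent behaviour maps naturally onto per-material failure parameters for a finite-element solver to dispatch on. A rough sketch with invented numbers (not Kali’s actual settings):

    # Invented per-material failure parameters: each material carries a
    # yield threshold and a characteristic response, echoing the tarmac /
    # glass / vault-steel examples above.
    MATERIALS = {
        "tarmac": {"yield_stress": 5.0e6, "mode": "crater"},
        "glass":  {"yield_stress": 4.0e7, "mode": "shatter"},
        "steel":  {"yield_stress": 2.5e8, "mode": "bend"},
    }

    def failure_response(material, stress):
        """Return how an element reacts to the stress the solver computed."""
        m = MATERIALS[material]
        if stress < m["yield_stress"]:
            return "elastic"        # springs back, no visible damage
        return m["mode"]            # crater near the surface, shatter, or bend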

Destroying a city: the invasion of Metropolis

Determined to conquer Earth and transform it into a new Krypton with ‘world engine’ machines, Zod launches northern and southern hemisphere strikes in Metropolis and the Indian Ocean, respectively. The result, until Superman battles and then defeats the Indian Ocean world engine, is that significant parts of Metropolis and its soaring skyscrapers are destroyed – a task given to Double Negative. The studio also realized further full-scale destruction as Superman and Zod wreak havoc on remaining buildings and each other.

“Down in Metropolis there was a very clear design edict that came from Zack about how the evolution of the battle was going to be,” says Desjardin of the lighting design for the film’s third Act. “The sun had to be not quite setting when the Black Zero comes down and then very quickly it’s in its setting position and by the time Superman and Zod go to fight it’s down below the horizon and there’s a Hawaiian cloud and colorful clouds and it’s getting dark with a twilight sky – more an ambient look. Then once Supe and Zod jump up into that sky then you have some other lighting options with the sky and certain lit billboards. It’s a way to make the city come alive to make it even more dramatic to keep characters backlit.”

To create a convincing Metropolis, Dneg looked to Esri’s CityEngine to help procedurally deliver the city, a tool it had first employed for the sprawling future world of Total Recall. “That was a much more sci-fi based role,” notes Dneg visual effects supervisor Ged Wright, “so we took what they had done and extended it a great deal. The work we were doing was based around the Downtowns for New York, LA and Chicago and that gave us the building volumes for heights. We’d skin those volumes with kit parts but most of it then had to fall down! So we had to rig it for destruction and use it for other aspects of the work as well.”

The previs effort

Director Zack Snyder worked closely with previs and postvis artists at Pixel Liberation Front to orchestrate some of Man of Steel’s most dynamic scenes.

The studio worked on more than 15 sequences, with PLF supervisor Kyle Robinson suggesting that some of the most challenging previs creations were the oil rig rescue and the Metropolis invasion. “They shot that oil rig rescue in a parking lot in Vancouver and they didn’t know how big the greenscreen needed to be, so we modeled it for them to specify what was necessary.”

Superman’s cape was also a major challenge. “We didn’t run a cloth sim for the cape,” says Robinson. “If we had run a cloth sim it would only appear to react to the forces applied to it and not be a character in its own right. We rigged the cape to have a deformation in it and it had a bone in it so we could move and pose it, for the right position for comic book poses.”

Using Maya for main animation and its proprietary character and camera rigs, with extra work done in After Effects, PLF followed Snyder’s boards and art department concepts to flesh out scenes, and later helped editorial with postvis.

For building destruction, in particular, the studio re-wrote its own asset system to be geared towards dynamic events. An implementation of the Bullet engine inside Houdini – dubbed Bang – became Dneg’s main destruction solver, with a core philosophy of allowing for quick iterations with heavy control. “We wanted to be able to run an RBD event and trigger all these secondary events, whether it was glass or dust simulations – all of those things needed to be chained up and handled in a procedural way,” says Wright. “One of the advantages of this was that, because it was all based around a limited number of input components, you can make sure they’re modeled in a way they’re useable in effects – you can model something but there’ll be another stage to rig it for destruction.”
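The chaining idea can be pictured as a simple event queue: each rigid-body event emits the secondary events (glass, dust, debris) that spawn follow-on sims procedurally rather than by hand. A hypothetical illustration in Python, not Dneg’s Bang API:

    from collections import deque

    # Each event type maps to a handler that emits its secondary events,
    # so one RBD fracture procedurally spawns the sims behind it.
    HANDLERS = {
        "fracture":    lambda e: [{"type": "dust", "pos": e["pos"]},
                                  {"type": "debris", "pos": e["pos"]}],
        "glass_break": lambda e: [{"type": "shards", "pos": e["pos"]}],
        "dust":        lambda e: [],
        "debris":      lambda e: [],
        "shards":      lambda e: [],
    }

    def run_chain(initial_events):
        queue, tasks = deque(initial_events), []
        while queue:
            event = queue.popleft()
            tasks.append(event)                      # schedule a sim task
            queue.extend(HANDLERS[event["type"]](event))
        return tasks

    tasks = run_chain([{"type": "fracture", "pos": (12.0, 0.0, 4.0)}])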

In addition, fire, smoke and water simulation tools were further developed at Dneg. The studio moved from its existing proprietary volume renderer DNB to working in Houdini and rendering in Mantra for elements such as fireball sims. Dneg’s in-house fluids tool Squirt also benefited from new development to handle larger scale sims and interaction for more tightly coupled volumes and particles. Overall, the studio’s rendering pipeline has moved to a more physically based approach in RenderMan.

Within the Metropolis sequence there were numerous other requirements, including attacking and destroying aircraft and, of course, digital representations of Superman and Zod when they fight. One particular element Dneg contributed and also shared with other facilities was Zod’s armour. “There was no practical armour for Zod,” states Wright, “he only wore a mocap suit. We took concept art, came up with a ZBrush sculpt of the armour and could show them turntables of what it would look like during filming.”

Dneg took MPC’s Superman and Zod models and adjusted them for their own pipeline in order to rig, groom hair and adjust shaders. “We also have more of a photogrammetry approach to facial,” says Wright, “so we made the actors sit there again with an eight camera rig – similar to Light Stage but portable and gives us polarized photography to reconstruct the facial expressions.”

Zod and Superman battle in amongst buildings, and when they hit each other they tend to generate enormous shockwaves that rip skyscrapers in half. Although much of this was completely digital (some live action was shot in Chicago and then on Vancouver greenscreen soundstages), Wright says Dneg implemented real photography onto its digital doubles wherever possible. “Because you have their performances you engage with it – and your eyes go straight to their faces. If they’re big enough in frame and doing something, you want to use a photograph of them. As soon as you buy that and get what’s going on, you’re more willing to take on board what’s going on with the rest of the frame.”

Adds Wright: “There’s one shot where Zod hits Superman up the side of the building. Superman is hovering above. Zod starts running up the side of the building. This is just before he rips his armour off and is taking in more of the sun’s energy. Superman flies down to hit him and the two of them collide causing that shockwave. DJ and Zack were both really keen to make it feel like two Gods were fighting, and they were at the height of their powers right then.”

Rounding out Man of Steel’s effects

Helping to round out the effects work on the film were companies like Scanline and Look Effects. Here’s how they added crucial shots to the film.

Scanline – tornado and oil rig

Scanline delivered shots of the tornado sequence in which Smallville residents shelter in an underpass from an approaching twister, while Clark Kent’s father Jonathan returns to a vehicle to rescue a pet dog. “For the tornado itself,” explains Scanline visual effects supervisor Chad Wiebe, “we actually came up with a unique methodology by combining a number of individual fluid sims which would be wrapped around the funnel. This allowed us to create a bigger and denser funnel without some of the overhead that would have been generated by trying to do a single sim for the entire funnel. This also allowed us to pick and choose from a library of different sims which gave us greater control over the look and speed of the funnel, the variation of different parts of the funnel, as well as the technical aspects such as density and resolution for some of the more close up shots.”
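A rough sketch of that funnel assembly (our own toy layout with invented parameters, not Scanline’s setup): pick cached sims from a library and instance them in rings that tighten toward the ground.

    import math
    import random

    def build_funnel(sim_library, height=400.0, layers=40, per_layer=6, seed=7):
        """Instance cached fluid sims around a tornado funnel.

        sim_library: list of cache names to draw from (hypothetical). Each
        layer places several instances on a ring that narrows toward the
        ground, approximating one dense funnel without one enormous sim,
        and letting each instance vary in look, density and resolution.
        """
        rng = random.Random(seed)
        instances = []
        for i in range(layers):
            z = height * i / (layers - 1)
            radius = 10.0 + 60.0 * (z / height)       # tight at the ground
            for j in range(per_layer):
                angle = 2.0 * math.pi * j / per_layer + rng.uniform(0.0, 0.5)
                instances.append({
                    "cache": rng.choice(sim_library), # pick from the library
                    "pos": (radius * math.cos(angle), z, radius * math.sin(angle)),
                    "spin": 1.5 - z / height,         # faster rotation low down
                })
        return instances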

Along with the tornado, artists added ground dust and debris, farm buildings and uprooted trees. “We also had to create digital versions of all vehicles that were shot on set as well as a number of additional vehicles to suggest a longer line up of traffic stopped on the freeway,” says Wiebe. “As the sequence progresses most of the vehicles end up getting damaged or destroyed to some degree so in addition to a typical vehicle rig for the basic motions and wind buffeting, we also created a system where we could dynamically damage the vehicles based on collisions with one another or based on forces as was the case with the Kent truck which gets destroyed at the end of the sequence.”

An earlier sequence of Clark saving workers from a burning oil rig made use of reference of the BP Deepwater Horizon explosion and the Toronto Sunrise Propane factory explosion. “We tried to make sure we were as accurate as possible regarding the look of the fire and smoke plumes that are generated by oil fires, which have a very unique and identifiable quality,” notes Wiebe. “The exterior plates were shot with the actors on a set-built helipad, with a real helicopter, and green screen surrounding 3/4 of the set. From there we created an entirely digital oil rig, and we would composite the actors and helipad onto our digital rig, or at times we would replace the helipad as well. Many of the hero helicopter shots also utilized a digital version of the helicopter in order to get the interactive lighting and reflections matching.”

“The oil rig collapse was a series of rigid body simulations created using Thinking Particles,” says Wiebe. “From there we would also add fire, smoke and dust trailing off the rig using Flowline, which was also used for the fluid sim when the rig came crashing down into the ocean below. There were also a series of explosions happening throughout the sequence also using Thinking Particles for the RBD’s and Flowline for the fire and smoke.”

Look Effects – the bus crash

In the film, Clark remembers key moments from his youth, including those that gave hints to himself – and others – of his tremendous powers. One is the crash of a school bus filled with children. After it blows a tire and launches off a bridge into a river, Clark dives out of the bus and pushes it to the bank, and then rescues another child from under the water. Much of the crash was filmed practically on a bridge and quarry location, and then on a tank stage. Look Effects helped piece the scene together.

Some of the work included rig and camera removal and also clean-up of the bridge railing. “There was a POV camera angle from the bridge looking down at the bus as it was sinking into the water,” notes Look visual effects supervisor Max Ivins. “The bridge part was shot separately from the sinking shot and the exterior sinking was shot in a rock quarry so there was no moving water like the river. We did some CG replacements of the vents on top of the bus and we had to make the remnants of the splash when someone runs up to look at the bus over the edge. We added the foam ring and the bubbles coming up. Used stock footage and CG elements to make a post-splash surface of the water.”

For interior shots of the bus with the children, Look altered water levels to make the danger appear more prominent. “They had a surface outside of the bus that was basically the same as the inside of the bus – they couldn’t really sink it because there are kids involved,” explains Ivins. “So they had to make it look like the outside water was 2 feet taller than the inside water that is rushing in to give it that sinking feeling. So we did whole simulations and cleaned up some of the lighting. We made bubbles coming up and making it turbulent.”

Look’s other contributions to the film included several monitor comps, including ones at NORAD, and some artefact clean-up for a time-ramped flashback signature shot of Clark wearing a cape, hands on his hips, in front of some blowing dandelion heads.

The Beauty Of 90s Anime Aesthetic

Anime continues to get more and more impressive with its visuals, but as beautiful as some anime now look, at times I feel they still don’t match up to some older anime, specifically anime from the 90s. The 90s anime aesthetic is, to me, the best look for anime. It came from a time before anime used as much digital art or CGI as it does currently, and this gave 90s anime a unique look: the colors felt real and comforting, the backgrounds were full of detail and could be stared at all day, and the character designs were so charming. Mostly everything was hand drawn and made with traditional animation. I hope in this video to show you that just because the popular new digital anime is, well, newer does not mean it looks better than anime from the past.

Final Fantasy XIII Review

https://www.videogamer.com/reviews/final-fantasy-xiii-review/

After 20 hours, Final Fantasy XIII granted me permission to decide for myself which three playable characters should be in my party. After 25 hours, Final Fantasy XIII granted me permission to decide for myself how I should develop the characters in my party. After 30 hours, Final Fantasy XIII decided to let go of my hand, but then thought better of it and grabbed hold of it again. Welcome to the evolution of the Japanese role-playing game.

Let’s talk about linearity. You’ve no doubt already heard that FFXIII is linear; the PS3 version’s been out in Japan for nearly three months, and importing is a beautiful thing. Well, it’s true: FFXIII is linear. So linear, in fact, that for the first ten chapters – approximately 20 hours of gameplay – FFXIII feels more like a dungeon crawler than an epic, expansive JRPG. There are no side-quests to add variety. There are no towns or villages to visit. There is no overworld to explore. You move forward, fight, fight, and fight, then sit back and watch a cutscene, then do it all again, pushing ever forward, never deviating from the straight and narrow path upon which you must tread. At the end of a chapter, there’s a boss fight, which is usually a pretty horrendous difficulty spike, then a cutscene, and the next part of the tunnel. The Final Fantasy series, and indeed the JRPG genre, has always been a somewhat linear experience, punctuated by turn-based combat and beautiful CGI cutscenes, and driven by melodramatic narrative. But FFXIII is so linear that it feels like you’re adventuring through one long, dark tunnel, and there’s no light at the end of it to give you hope that at some point your journey will change course.

It’s a deliberate design decision on producer Yoshinori Kitase and co’s part, of course – an effort to lend the game what director Motomu Toriyama calls an “FPS style vibe”. He’s obviously been playing the scripted Modern Warfare series and taken notes. But the team’s gone too far in its efforts to evolve the tried and trusted – some say tired – Final Fantasy formula. The result is a sanitised, uninspiring, monotonous trudge through admittedly fabulous-looking surroundings. It’s as if you are being driven to the end of the game as you sleep in the back seat.

Other design decisions only serve to exacerbate the feeling that you’re never truly in control of what’s happening. The game dictates who is on the front line of your party – i.e., who fights in battles – for the first 20 hours of the game. It constantly switches between lead character Lightning (female Cloud), blonde-haired brute Snow, the gun-toting Sazh, Oerba Dia “jailbait” Vanille, the sultry Fang, and the Tidus-a-like Hope, progressing the story from various perspectives until all come together and the game nears its exciting climax. Once you get past the 20 hour point, and you’re finally allowed to decide the make-up of your party, it’s easy to forget that for huge swathes of the game you haven’t been able to. But occasionally, beyond that point, the game reverts to type, dictating your party make-up and defying all logic (the party travel everywhere together, so why can’t they all get involved in a scrap?).

You can’t even develop your characters the way you want to. Each party member has access to what are called “roles” – classes, really. The theory behind the system is that instead of having characters that only fulfil one role on the battlefield, such as a healer, tank, or damage dealer, each character is flexible. In a fight, at any time you can trigger a “Paradigm Shift”, which allows you to change the role of each party member. Say you begin a fight with Relentless Assault, which includes one Commando (melee) and two Ravagers (damage-based spell casting) – that’s great for doing loads of damage to your enemies. But when your party’s health starts to near zero, you’ll want to Paradigm Shift to other roles, making available new abilities. You may want to switch to Consolidation, which includes one Medic (healer) and two Sentinels (tanks), allowing you the breathing space to get everyone up to a safe number of hit points.
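Mechanically, a paradigm is just a named assignment of roles to the three front-line slots, swapped all at once mid-battle. A toy sketch (the data layout is ours; the role lists mirror the examples above):

    # A paradigm names a role assignment for the three front-line slots;
    # switching re-maps everyone at once, changing their ability sets.
    PARADIGMS = {
        "Relentless Assault": ["Commando", "Ravager", "Ravager"],
        "Consolidation":      ["Medic", "Sentinel", "Sentinel"],
    }

    class Party:
        def __init__(self, members):
            self.members = members                   # three front-liners
            self.roles = {}

        def paradigm_shift(self, name):
            for member, role in zip(self.members, PARADIGMS[name]):
                self.roles[member] = role            # new role, new abilities

    party = Party(["Lightning", "Snow", "Hope"])
    party.paradigm_shift("Relentless Assault")       # pile on the damage
    party.paradigm_shift("Consolidation")            # turtle up and heal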

FFXIII dumps traditional levelling-up for a carefully-controlled system via what’s called the Crystarium. It’s a bit like FFX’s Sphere Grid. You spend Crystarium Points – gained from defeating enemies – as you travel around the Crystarium, unlocking statistical bonuses and new abilities, and gaining role levels along the way. This, in theory, is fine. The problem is, the game “caps” the Crystarium relative to each chapter, limiting the number of Crystarium Points you can spend on your party members, and which roles are available to each character. It is only when you beat a chapter end boss, and you get a “Crystarium Expanded!” message, that you’re allowed to spend more points in the Crystarium and climb up the role level ladder.
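In effect, the Crystarium is a spend-limited skill track: points accumulate freely, but nodes can only be bought up to the current chapter’s cap, and only a chapter-end boss raises it. A toy model (costs invented):

    # Toy model of the Crystarium cap described above. Node costs and the
    # starting cap are invented for illustration.
    class Crystarium:
        def __init__(self, node_costs, cap=3):
            self.node_costs = node_costs   # cost of each node along the track
            self.points = 0
            self.unlocked = 0              # nodes bought so far
            self.cap = cap                 # highest node this chapter allows

        def spend(self):
            while (self.unlocked < self.cap
                   and self.points >= self.node_costs[self.unlocked]):
                self.points -= self.node_costs[self.unlocked]
                self.unlocked += 1         # stat bonus or new ability

        def expand(self, new_cap):         # 'Crystarium Expanded!' after a boss
            self.cap = max(self.cap, new_cap)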

Square Enix’s goal in doing this is clear: to negate the need to grind. It’s true, for the first ten chapters of the game (about 25 hours), there is absolutely no need to grind, or backtrack (you can’t anyway), or move in any direction other than forward. But, by the same token, there’s no real need to think strategically about what you spend your points on within the Crystarium. You mindlessly evolve your character along a linear skill tree path in much the same way you explore the gameworld, stopping only to occasionally check out what your new abilities do. Admittedly, from the more expansive, open-field chapter 11 onwards, all of the roles become available to all of the characters, and you’re free to spend as many points in the Crystarium as you like – a good thing, because chapter 11 is much harder than what’s gone before, and the dreaded grind rears its ugly head. But by then the damage has already been done.

Some mechanics, however, have evolved for the better, and FFXIII’s Active Time Battle (ATB) combat system is one of them. FFXIII’s combat is like the lovechild of FFXII’s divisive Gambit system and FFVII’s classic ATB system. Like in FFXII, all three of your front-line party members are visible as you explore the game world, and you can avoid many of the enemies you see pottering about waiting to be disturbed. But unlike in FFXII, when combat is triggered there’s a short transition to a battle screen, where all the action kicks off.

FFXIII’s ATB gauge charges continuously over time. Commands queue up, each one with an associated ATB cost. At the beginning of the game, each character’s ATB gauge only has a couple of segments, limiting the number of commands that can be queued up, but as you progress you gain more – up to a maximum of six. Once the ATB gauge is filled, all the commands play out one after the other in real time. When you’ve got a three-person party going up against multiple enemies using spells, melee attacks, and other abilities, combat is a spectacular, visceral sight.
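A toy model of that loop (our sketch, with invented rates): the gauge charges continuously, commands queue against its segments, and a full gauge plays the whole queue out in order.

    # Sketch of the ATB loop as described above. Charge rate and command
    # costs are illustrative guesses, not the game's actual tuning.
    class ATBGauge:
        def __init__(self, segments=3, charge_rate=1.0):
            self.segments = segments          # grows to a maximum of six
            self.charge_rate = charge_rate
            self.charge = 0.0
            self.queue = []                   # (command, cost) pairs

        def enqueue(self, command, cost=1):
            if sum(c for _, c in self.queue) + cost <= self.segments:
                self.queue.append((command, cost))

        def tick(self, dt):
            self.charge = min(self.segments, self.charge + self.charge_rate * dt)
            if self.charge >= self.segments and self.queue:
                for command, _ in self.queue: # commands play out back to back
                    print("executing:", command)
                self.queue.clear()
                self.charge = 0.0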

Now, you may wish to sit down for what’s coming next: you only ever control one character during combat. We know, madness, huh? What has Square Enix done? Why have you dumbed down FF for the casual noobs! Calm down, dear. It’s actually really good. Yes, you only control one character at a time, but, with the Paradigm Shift function, you indirectly control everyone, and the AI is really, really good. Say, for example, you’re controlling the gunsword-wielding Lightning, with the broad-shouldered Snow and the moody Hope backing you up, and with the Strategic Warfare Paradigm (Commando, Sentinel and Synergist) enabled. As Lightning, you’ll be concentrating on doing damage to your enemy with melee attacks – dealing physical damage to multiple targets with Blitz, perhaps, or maybe just slashing the crap out of a single target with basic attacks. While you’re busy getting your game on, the AI makes sure Snow and Hope are doing their bit as effectively and efficiently as possible: Snow attracting the attacks of enemies with tanking abilities like Challenge, and Hope buffing Lightning, Snow and himself with spells like Shell and Protect.

Of course, the AI doesn’t always do exactly what you want it to, but on the whole it’s pretty smart. For example, if you use Libra to learn an enemy’s weaknesses, your party members will automatically exploit them by using the appropriate elemental attacks. We only ever found the AI wanting when controlling Medics. When a party member is knocked out, an AI-controlled Medic always prioritises raising everyone’s hit points over reviving the downed character.

FFXIII’s combat system is the best the series has seen. It’s exciting to watch, fun to use, and, most importantly, brimming with strategy and depth. The only problem with it – and this will be a big problem for some – is the Auto Battle option. Here, with one button press, you can let the computer decide which commands to queue up for you. It does such a good job that it’s all too easy to sit back and spam Auto Battle without mentally engaging in combat. The game doesn’t do itself any favours by being ridiculously easy for huge chunks of its first 30 hours, only spiking the difficulty for bosses and “Eidolon” fights – FFXIII’s disappointing transforming mechanised monsters must first be “tamed” before they can be summoned in battle. It’s perfectly possible, then, to play for hours on end by only pushing up on the left analogue stick when exploring and pressing A/X over and over again when in combat.

Story is, of course, massively important for Final Fantasy fans. The great news is that in this regard you won’t be disappointed. The game begins with a sabotaged train journey in which members of a resistance force battle against an oppressive government and what’s called The Purge – an effort to deport citizens of a spherical world called Cocoon to the underworld below. It’s all to do with horrible beings called fal’Cie, which turn ordinary people into slaves called l’Cie. L’Cie are given what’s called a Focus – an order, essentially, which they must carry out or face being turned into mindless zombies. Either way, though, l’Cie are screwed. Complete a Focus, and they turn into crystal for the rest of eternity. This is the terrifying fate that Lightning and the rest of FFXIII’s eclectic bunch of adventurers face when, early on, they are cursed as l’Cie. The game revolves around their attempt to work out what their Focus is, while unravelling the truth behind the oppressive government Purge.

FFXIII starts slowly – very slowly – but it hits its stride around the 20 hour mark, evolving into an entertaining romp packed full of drama, revelation, and more drama. The six main characters are annoying at first, but they all grow as people as their lives spiral inexorably out of control. Hope, for example, starts off as a spiky-haired whine-bag hell-bent on stabbing Snow in the back for murdering his mother. ‘Oh god’, you think, ‘not another JRPG emo!’. But as you play you can see Hope growing up. In their own ways, all of the central characters do this. It’s sophisticated, engaging, and helps drive you to finish the game despite its faults.

And then of course, there are the graphics. FFXIII – the first HD FF game – looks fantastic. The in-game character models are superb. Lightning’s hair blows in the wind, Snow’s jacket ripples realistically as he dishes out his unique blend of knuckle sandwich, and Sazh’s afro… well, it wobbles about like jelly, which isn’t realistic at all, but from a distance it looks great. Some of the environments look stunning, too. The Hanging Edge, for example, is what you imagine Midgar would look like had it been created in high definition and powered by current-generation processors. The vista in the seaside city of Bodhum is up there with the best Uncharted 2 had to offer. And Gran Pulse, the setting of FFXIII’s infamous chapter 11, is a genuine sight to behold – an open-field safari packed with enormous, earth-shaking four-legged beasts and rabid monsters sprinting in packs, all overlooked by the ominous vision of Cocoon hanging high in the sky. FFXIII’s sci-fi world is as colourful and vibrant as any gamer tired of dour, depressing game worlds could hope for. It is quintessentially Final Fantasy – a distinctly Japanese take on science fiction, fuelled by a wonderfully uplifting score composed by Masashi Hamauzu – and it acts as the perfect antidote to the concrete, post-apocalyptic world of Fallout 3 and the lens-flare filled galaxy of Mass Effect 2.

But the CGI cutscenes will no doubt steal the show. They are, quite simply, the best ever; to our eyes as good as the Final Fantasy CGI movies. There are loads of cutscenes in FFXIII, but they are not, in isolation, offensively long, as they are in MGS4. They are bite-sized chunks of animated brilliance, and demand to be watched over and over again. But the more impressive feat is how good the “in-between” cutscenes look. These cutscenes – not CGI but not in-game – look fantastic, and sometimes fool you into thinking you’re watching CGI. There can be no doubt that FFXIII is a graphical feast worthy of anyone’s high definition television.

However, it doesn’t always look fantastic. Some of the environments look bland and, dare we say it, lack detail. This is particularly true of the Vile Peaks area – a land built with the debris used by the fal’Cie to construct Cocoon. Almost all of the game’s interior sections are boring to look at – a particularly disappointing, and frustrating, sight to endure when you’re forced to spend hours soldiering through these locales. It’s particularly irksome because you know the game is capable of so much more – you’ve just seen it in the last chapter.

You all want to know about the differences between the PS3 and the 360 versions, don’t you? Of course you do. Well, here’s the truth: the PS3 version is the one to get. To our eyes, the gameplay visuals look similar across both platforms, but the cutscenes are vastly different. On PS3, and, therefore on Blu-ray, the cutscenes are displayed natively in 1080p, whereas the cutscenes in the 360 version are sub 720p. The cutscenes in the 360 version look, to the discerning eye, pixelated and blurry. But to the untrained eye, it’s a case of much ado about nothing.

Despite the superb battle system, engaging cutscenes, and interesting characters, FFXIII, ultimately, is a disappointment. Taken in isolation, it is a fun game with stunning graphics and a compelling story. But compared with the wider RPG genre, and held up against the lofty expectations of the series’ hardcore fans, it falls short. For this reason, newcomers may well enjoy FFXIII more than series’ veterans.

You just can’t escape the feeling that, in trimming the fat from the series, Square Enix has nicked FFXIII’s bone. It’s not a bad game by any stretch of the imagination; like a good song, or a slow-burning book, FFXIII grows on you the more you play it. It is, undoubtedly, the best JRPG to come out of Square Enix in a long time. But the inescapable, uncomfortable truth is that it is too linear. Without traditional JRPG features like towns, NPCs, and an overworld, there is no real sense of ownership. Upon completing the game, you certainly feel as if you’ve enjoyed the 50 or so hours you’ve invested into doing so, but the experience is more throwaway than formative. Despite some incredibly tough monster-hunting missions in chapter 11, there’s no variation to the game whatsoever.

FFXIII spends too long easing players into its complex systems – complex systems which, really, aren’t that complex. In a recent interview, Kitase said: “It’s better to see some people be a little bit bored” than give players too much information to digest. We had no idea he was talking about 25 hours of boredom. Toriyama recently said that lower than expected review scores are the result of press reviewing “from a western point of view”, as if to say we’re missing the point. But surely, in today’s global village and instant communication age, taking a global perspective on a high profile internationally-released video game is the only proper course of action.

As Western role-playing games have evolved, delving into open-world, player-driven territory (Elder Scrolls, Fallout) and cinematic, cross-genre experiences (Mass Effect, Borderlands), Japanese role-playing games have remained largely the same – stuck in a rut, even – telling tales of teenage angst and upbeat heroic fantasy we’ve heard countless times before. We’re not saying we wanted Final Fantasy to copy WRPG mechanics. We simply wanted – quite desperately – for Final Fantasy XIII to be the best JRPG of all time. You have to hand it to Square Enix for trying to move things forward – better that than yet another rehash of the tried and tested Final Fantasy formula (the less said about Infinite Undiscovery the better). But it does so along a path so narrow and straight that you long for the days of old. When Vanille is knocked out in battle, she sometimes says: “What went wrong?” It’s a question we find ourselves asking as well.