Monday, 2 March 2015

Samsung on defense against Apple with the Galaxy S6

Samsung Galaxy S6

Samsung’s Galaxy S6, announced at Mobile World Congress in Barcelona yesterday, looks like a really nice upgrade from the disappointing Galaxy S5 — which apart from a screen and camera resolution bump and some software refinements, was basically a GS4, and missed sales projections by up to 40 percent as a result.
That said, it’s almost impossible to look at the Galaxy S6 and not think “iPhone 6.” The Galaxy S6 takes several big cues from Apple’s recent upgrade — just as the latter company aped some of Samsung’s innovations the last time around. And so the cycle continues.
This has been a tit-for-tat for years. And there’s a reason for Samsung's almost brazen copying: iOS is now on top once again in U.S. smartphone share, for the first time in three years.
The most obvious thing Apple did in response to Samsung last September was to increase the size of the iPhone, as well as offer it in two sizes instead of just one. Some pundits have said they doubt Steve Jobs would ever have done this, as he was clearly committed to the 4-inch (or 3.5-inch), one-handed design of all previous iPhones. The iPhone’s expanded notification bar and swipe-up settings also take cues from earlier Samsung phones, and iPhones now let you install third-party keyboards like Swype. (I’m mixing up hardware and software developments here, but with Apple, they’re basically one and the same, since Apple also controls iOS.)
This time around, Samsung copied a number of iPhone 6 design elements, starting with the rounded, brushed metal edges. The bottom edge of the GS6 in particular is almost identical to the iPhone 6, from the speaker grille to the charger port and headphone jack. There’s no more removable battery or expandable microSD card slot, moves that are already lighting up the Internet with complaints. The GS6’s upgraded camera sticks out a bit on the back, just like with the two newest iPhones, although the back of the GS6 is made of Gorilla Glass 4 and not aluminum. And Samsung has added support for Samsung Pay, giving the company its own payment solution to compete with Apple Pay. (Neither phone has a sapphire screen.)
Galaxy-S6-table
The Galaxy S6 Edge is more distinctive, with a refined, double-sided version of the curved display from the previous Galaxy Note Edge. It’s certainly an interesting design, and combined with the superlative resolution of the Galaxy S6 display, it could yield some real software and UI innovations if Samsung keeps developing for it. I’m intensely curious to see how this variant sells.
That brings us to software. Samsung’s problem, and Android’s problem in general, is getting the app market in line. Developers continue to release for iOS first, for a variety of reasons. Android handsets have great appeal: a much wider variety of hardware configurations, and eminently configurable and hackable software. Samsung’s phones, for example, let you tune voice quality.
As for the other enhancements in the GS6, I’d argue they were mostly necessary. Google is trying to rein in the use of microSD cards; the whole app vs. media storage thing is confusing, and there’s no reason why phones can’t come with enough storage to begin with. The loss of the removable battery probably isn’t as big a deal, although Samsung has to make sure its various retail partners know how to replace the battery down the line, as it loses charge capacity.
In the end, I’d argue Samsung had to take some steps closer to the iPhone in order to remain competitive. I really like Samsung’s renewed focus on design, after years of plasticky handsets that looked and felt inferior to HTC and Apple models, even if they were technically superior in many cases. The move away from Qualcomm to its own Exynos processor is interesting, and should do a lot for Samsung’s bottom line. And it’s hard not to think that this GS6 is what the lukewarm GS5 refresh should have been.
Both the Galaxy S6 and S6 Edge are drool-worthy Android phones. It remains to be seen if Samsung — and Android in general — can gain back some U.S. market share from Apple as a result.

Poland’s new optical atomic clock will keep better time than all previous clocks

Atomic Clock

There is a new, super-accurate clock ticking away at the National Laboratory for Atomic, Molecular and Optical Physics (KL FAMO) in Poland. Although it’s not really ticking, because this is an optical atomic clock, one of only a few in the world. It’s so accurate that it would take billions of years to reach an error of one second. To put it another way, if you started an optical atomic clock at the moment of the big bang, it would have lost about one-tenth of a second by now. Now that’s something you can set your watch by.
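How does "billions of years per second" pencil out? Here’s a rough back-of-the-envelope calculation, assuming a fractional frequency uncertainty of about 1e-17 (a typical figure for optical lattice clocks, and my assumption, not KL FAMO’s published spec; the one-tenth-of-a-second-since-the-big-bang figure implies an even smaller uncertainty):

```python
# Back-of-the-envelope arithmetic for optical clock accuracy.
# ASSUMPTION: fractional frequency uncertainty of ~1e-17, a typical
# figure for optical lattice clocks (illustrative, not KL FAMO's spec).
FRACTIONAL_UNCERTAINTY = 1e-17

SECONDS_PER_YEAR = 365.25 * 24 * 3600
AGE_OF_UNIVERSE_YEARS = 13.8e9

# Years of continuous running before the accumulated error hits 1 second.
years_per_second_error = 1 / (FRACTIONAL_UNCERTAINTY * SECONDS_PER_YEAR)

# Error accumulated had the clock run since the big bang.
error_since_big_bang = (FRACTIONAL_UNCERTAINTY * AGE_OF_UNIVERSE_YEARS
                        * SECONDS_PER_YEAR)

print(f"{years_per_second_error:.1e} years per second of error")  # ~3.2e9
print(f"{error_since_big_bang:.1f} s of error since the big bang")
```

Even at this conservative uncertainty, the clock would run for roughly three billion years before drifting by a full second.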
The new Polish clock is no desktop timepiece. It occupies four rooms at KL FAMO and has three main components — an atomic standard, an optical comb, and a high-precision laser. Optical atomic clocks are still very new technology, but they have the potential to be even more accurate than the traditional cesium fountain clocks used for time-based experiments all over the world. In past tests of optical atomic clocks, researchers have found they had no trouble keeping time to the standards of the National Institute of Standards and Technology.
Researchers chose strontium 88 atoms as the atomic standard in this clock. Strontium 88 is the most common isotope of the alkaline earth metal and is stable, but strontium 87 can be used in one of the two standard containment chambers for increased accuracy. The atoms are suspended in a vacuum at below 10 microkelvins. To record the passage of time with this atomic standard, you simply shoot a laser at the atoms. Okay, it’s not that simple.
The laser emits light at a frequency of 429 terahertz, illuminating the atoms. The frequency of the laser is tuned to match the oscillations of the atomic standard. That frequency is far too high to count electronically, so you can’t use it to keep time directly. That problem is bypassed with the optical comb. The optical frequency comb (another type of pulsing laser) fires extremely short bursts of light that can be synchronized with the high-frequency main laser. It basically translates the output of the high-precision laser into radio frequencies, which can then be counted electronically.
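To make the comb trick concrete, here’s a small numerical sketch. A comb’s teeth sit at f_n = f_ceo + n × f_rep, where the offset f_ceo and repetition rate f_rep are both radio frequencies that ordinary electronics can count; the parameter values below are illustrative, not KL FAMO’s actual numbers.

```python
# Illustrative frequency-comb bookkeeping (made-up comb parameters).
f_rep = 250e6   # repetition rate: 250 MHz, countable by electronics
f_ceo = 20e6    # carrier-envelope offset: 20 MHz, also countable

f_clock = 429e12  # the 429 THz strontium clock laser

# Index of the comb tooth nearest the clock laser.
n = round((f_clock - f_ceo) / f_rep)

# The beat note between the clock laser and that tooth is a radio
# frequency; counting it (along with f_ceo and f_rep) pins down the
# optical frequency without ever counting 429 THz directly.
f_beat = f_clock - (f_ceo + n * f_rep)

print(n)                  # 1716000 -- the tooth number
print(abs(f_beat) / 1e6)  # beat note in MHz, well within radio range
```

The measurement chain thus only ever counts megahertz-scale signals, while the accuracy comes from the terahertz-scale atomic transition.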
There are a lot of components to fine-tune and things to test before the KL FAMO clock will be ready for use in experiments. You can’t just flip a switch and turn this clock into one of the most accurate timepieces on the planet. The early data collected from the clock indicate that it is already the most accurate clock in Poland, which has a number of conventional atomic clocks. Having a highly accurate way of tracking the passage of time is of vital importance when testing aspects of general relativity and particle physics.

Intel finally agrees to pay $15 to Pentium 4 owners over AMD Athlon benchmarking shenanigans

AMD vs. P4

Intel has agreed to settle a class action lawsuit that claims the company “manipulated” benchmark scores in the early 2000s to make its new Pentium 4 chip seem faster than AMD’s Athlon. Intel will pay affected consumers $15 if they purchased a Pentium 4 system between November 20, 2000 and June 30, 2002. Eligible systems include any system with a Pentium 4 CPU purchased between November 20, 2000 and December 31, 2001, plus any system with a first-gen Willamette P4, or any P4 clocked below 2GHz, purchased between January and June 2002. The exception is Illinois — if you live in Illinois and bought a P4, too bad for you.
Don’t worry about digging up a receipt for the purchase — the only thing you’re required to do is list the model number of the system you bought, and you qualify for the $15 reimbursement. You are required to verify under penalty of perjury that you belong to the stated class, but that’s the extent of the hassle. Intel will also make a $4 million donation to an education fund as part of the settlement.

Did benchmark manipulations impact AMD’s relative performance?

Short answer? Yes.
Longer answer: Yes, and we can prove it.
Chipzilla vs. AMD
Let’s look at two cases. First, there’s Sysmark. AMD CPUs were extremely competitive in Sysmark 2000, but fell far behind the Pentium 3 and Pentium 4 in Sysmark 2001’s Internet Content Creation tests. An investigation turned up the reason why — instead of simply checking to see if a CPU supported SSE, Windows Media Encoder 7 checked for the “GenuineIntel” string. Since AMD chips didn’t have it, the program refused to use SSE for AMD’s processors.
At the time, this was treated as an unusual, one-off case — not a systemic campaign to damage AMD’s performance in system benchmarks. In fact, it was an early example of Intel’s “Cripple AMD” compiler function in action. It didn’t matter if AMD chips actually supported SIMD instructions — programs compiled with Intel’s compiler would refuse to use those instructions on AMD processors. (Sysmark 2002 was redesigned to blatantly favor and promote the P4, but that’s another story altogether.)
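The difference between the two dispatch strategies is easy to see in a few lines. This is an illustrative mock with made-up CPUID data, not Intel’s or Microsoft’s actual code:

```python
# Feature-flag dispatch vs. vendor-string dispatch, with mock CPUID data
# (real code reads these bits via the CPUID instruction).
def supports_sse(cpu):
    # The right way: check the SSE feature flag, whoever made the chip.
    return cpu["sse"]

def supports_sse_vendor_gated(cpu):
    # What Windows Media Encoder 7 effectively did: gate SSE on the
    # "GenuineIntel" vendor string, so capable AMD chips ran scalar code.
    return cpu["vendor"] == "GenuineIntel" and cpu["sse"]

athlon_xp = {"vendor": "AuthenticAMD", "sse": True}

print(supports_sse(athlon_xp))               # True: the chip has SSE
print(supports_sse_vendor_gated(athlon_xp))  # False: penalized anyway
```

The second check silently discards a capability the hardware advertises, which is why it looked like a benchmark result rather than a bug.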
More troubling is the issue of POV-Ray 3.6.0, which prides itself on being open-source. While this program was released somewhat later, it dropped simultaneously with Prescott’s launch (2004). When I tested it ten years ago, I found its performance to be extremely odd — the included benchmark ran slower on both AMD and Intel Northwood hardware compared to POV-Ray 3.5, but Prescott was much quicker.

A tech journalist scorned…

I wrote about this and declared it an example of benchmark shenanigans. In response, POV-Ray declared that I was lying. In an open letter, POV-Ray wrote “Our source code is openly available. In fact if you had cared to you could have downloaded both the v3.5 and v3.6 source code from our FTP site and compared them for any such tweaks – something that you did not, it appears, do.”
The funny thing is, I did do that — but the programmer friend who helped me with Intel’s compiler could never reproduce the results in POV-Ray 3.6.0, despite compiling six different executables with different optimization levels in an attempt to do so.
Fast forward almost a decade. A few months ago, I decided to play with a Perl script that can strip the “Cripple AMD” functions out of executables compiled by Intel compilers. I tested it on the copy of POV-Ray 3.6.0 I’ve kept on hand ever since. Please note that I tested using modern hardware and under Windows 7, not a 2004-era system. Not only did it detect and strip out the “Cripple AMD” function, the impact on performance was rather dramatic. (Note: POV-Ray 3.5 was not compiled with an Intel compiler; POV-Ray 3.6.0 was.)
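For a flavor of what such a patcher hunts for, here’s a minimal detector. It only finds occurrences of the vendor string inside a binary; the real Perl script goes further and patches the dispatch comparison itself. This sketch is mine, not that script, and the file name is hypothetical:

```python
# Scan a compiled executable for the "GenuineIntel" vendor string that
# Intel-compiled dispatch code compares against. Detection only -- a
# real patcher would also rewrite the comparison that follows it.
def find_vendor_checks(path):
    data = open(path, "rb").read()
    hits, start = [], 0
    while (i := data.find(b"GenuineIntel", start)) != -1:
        hits.append(i)  # byte offset of each occurrence
        start = i + 1
    return hits
```

Multiple hits in a benchmark executable aren’t proof of foul play on their own, but they tell you where to look.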
POV-RAY 3.6.0
I want to stress that this doesn’t mean AMD’s performance was crippled by 50% in the original test — but it’s clear that contrary to what the POV-Ray team was saying then, the 3.6.0 version of the test was compiled in a manner that tilted the competitive landscape towards Intel. But surely that’s a one-time thing, right? An artifact from ten years past?
Well, no. Not exactly. I also took Sysmark 2012 out for a spin, and applied the same script to strip out the Cripple AMD function from both the benchmark and its satellite applications.
Sysmark2012
Even allowing for run variance, the gaps in some tests are far too wide. To be fair, this doesn’t necessarily point to Intel cheating, because all of these applications are the work of third parties. The Sysmark 2012 executable itself still showed performance differences after I patched it (according to BAPCo, it’s compiled with Microsoft Visual Studio, not ICC, but the same strings were still detected and adjusted). Not all of the performance improvement comes from that change, however, which illustrates how complicated the performance-measuring field can be. It’s difficult to declare a benchmark “neutral” when the applications it runs are compiled in a manner that benefits one vendor over another.
The principal reason no one makes a big deal about these gaps anymore is that the difference between Intel and AMD has simply grown too wide. An 8-12% systemic improvement for Intel may make AMD look worse than it otherwise would, but AMD’s performance in Sysmark 2012 can lag Intel’s by as much as 50% — and that’s not something compiler patches can fix.
Not every benchmark compiled with Intel’s compiler shows evidence of this kind of shift — I tested every Cinebench test going back to 2000 on the A10-7850K, and while several executables were compiled with Intel’s compiler, none of them show any signs of performance difference when patched. But it’s interesting to see how compiler choices continue to influence the performance of supposedly neutral benchmarks.

Intel’s third-generation Xeon Phi to use 10nm technology, deploy second-generation Omni-Path fabric

Xeon Phi

Intel has announced that its third-generation Xeon Phi, codenamed Knights Hill, will deploy on 10nm technology and feature the second iteration of Intel’s Omni-Path fabric. Knights Hill is quite a ways out — Intel’s Knights Landing, which is based on 14nm technology, won’t launch until the summer of 2015, which means Knights Hill is likely a 2017 (or later) part.
Currently, Intel’s highest-end MIC (Many Integrated Core) part is Knights Corner, a 22nm design with 50 or more cores that derives from Intel’s classic Pentium (P54C), albeit with 512-bit AVX units and an entirely different memory architecture. Knights Landing will be built on 14nm and deploy the same Silvermont architecture that powers Intel’s Bay Trail. In a major departure, however, that iteration of the core will support four threads per CPU — currently Silvermont doesn’t use Hyper-Threading at all.
Knights Hill
Data on Knights Hill is currently extremely limited, but Intel is making the announcement now to reassure customers that there’s a roadmap stretching out beyond the Knights Landing product and the 14nm node. The first generation of Intel’s Omni-Path scaling architecture will debut next year. So far, Intel has focused on expanding the per-core capabilities of the Xeon Phi family rather than simply piling on more CPUs. Somewhere between 50 and 72 cores seems likely, though this could always creep up to 128 cores or more for the 10nm variant.
Future versions of the core will likely expand the onboard memory pool (16GB is expected for Knights Landing; Knights Hill could pack 32GB or more), add bandwidth, and increase the interconnect performance between the CPU and the associated MIC. Intel might push its AVX standard up as high as 1024-bit registers, but this is unclear and likely depends on trends in the HPC community. Adding wider registers might seem like a simple way to boost performance, but it’s subject to the same diminishing returns as everything else. The current AVX specification allows for extensions of up to 1024 bits in length, however, so Intel has left this option open in the long term.
Knights Landing
Knights Landing (the next card up for release) will feature on-package memory and the Silvermont core.
If Intel is introducing quad-threading into the Silvermont core for Knights Landing, it suggests that the company will keep this iteration of the CPU (and its multi-threading capabilities) for more than one generation. Whether it continues to build that capability out or whether the multi-threading is related to HT or uses a different type of resource allocation is still unknown. Companies like Sun and IBM have historically struck balances between the amount of threading in a core and its total single-thread throughput, and we expect Intel to do the same, even if Xeon Phi is explicitly designed for multi-threaded workloads.
Omni-Scale has been rebranded as Omni-Path, but the benefits are the same.
Omni-Path is Intel’s next-generation networking interconnect. It offers up to 100Gbps of bandwidth and will rely on Intel’s silicon photonics technology for signaling. The new standard offers up to 48 ports per switch, compared with 36 ports on other top-end standards, and is designed to lower the cost of huge build-outs by reducing the total number of switches. The long-term goal is to reduce latency and allow for more effective scaling as the industry pushes toward the elusive exascale goal.
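The port-count math is straightforward for a standard two-tier leaf/spine fat tree, a common HPC topology. Note these are generic topology figures of my own, not Intel-published sizing:

```python
# Switch-port math for a two-tier leaf/spine fat tree built from k-port
# switches: each leaf gives half its ports to hosts and half to spines,
# so the fabric tops out at k*k/2 hosts. Illustrative topology math,
# not Intel-published sizing.
def max_hosts_two_tier(k):
    return k * k // 2

print(max_hosts_two_tier(48))  # 1152 hosts with 48-port switches
print(max_hosts_two_tier(36))  # 648 hosts with 36-port switches
```

Going from 36 to 48 ports nearly doubles the hosts a two-tier fabric can reach, which is exactly the kind of switch-count saving Intel is pitching.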
For now, however, it still seems that Nvidia has pulled ahead in the overall performance game. If the Tesla K80 ships before Knights Landing, it’ll give Nvidia a further lead. All of this is complicated by the fact that HPC users may or may not be interested in rewriting software to take advantage of new APIs — Intel and Nvidia have traded rhetorical shots on that topic before, and we don’t expect that to change anytime soon.

Intel’s 14nm Broadwell chip reverse engineered, reveals impressive FinFETs, 13-layer design

Intel Core M/Broadwell-Y chip

When Intel announced the details on its 14nm process last year, it raised eyebrows in some circles by claiming some extremely aggressive scaling figures. Put simply, Intel stated that it would deliver a better 14nm process with superior characteristics, die size, and overall efficiency than any competitive product TSMC, its largest foundry competitor, would release on 20nm. This predictably kicked off a PR blizzard between the two companies.
Intel stated that it would bring 14nm in with substantial scaling in transistor fin pitch, transistor gate pitch, and interconnect pitch, with a further significant reduction in SRAM cell size. Now, independent analysis and reverse engineering from Chipworks has confirmed that Intel did indeed deliver on its technological promises. Gate pitch has been measured at ~70nm, fin pitch at ~42nm, and the chip uses a more complex 13-layer metal design. Intel had previously stuck with nine-layer designs before stepping up to 11 for its Bay Trail SoC.
The FinFET transistors of a 14nm Broadwell chip, as seen from above in plan view. [Image credit: Chipworks]
As chip designs shrink, metal layers have become more complicated. [Image credit: RealWorldTech]
Metal layers inside a chip are used to connect various features and areas of the chip. As chips have gotten smaller, it’s become increasingly difficult to route wires in ways that don’t negate the increased performance of the transistors themselves. Intel’s decision to step up to a 13-layer design may be partly responsible for Broadwell’s difficulties; the more metal layers you have to connect, the more difficult it is to design the chip efficiently.
The one potential slip that Chipworks notes is that while Intel claimed a 52nm interconnect pitch, they measured 54nm — but they also say that this is within the margin of measurement error, and that Intel may have simply measured from a different point of the die. They also confirm that Intel hit its SRAM cell target size of 0.058 µm².
A 14nm Broadwell chip, side-on, showing all 13 layers
Another shot of the fins of the 14nm Broadwell FinFET transistors

What does this mean for Broadwell?

So, what does the big picture mean for Intel’s hardware? It means I’m more inclined to think that the problems of the Lenovo Yoga 3 Pro are caused either by Lenovo’s design decisions or by power management software; OS-level drivers could also be an issue. Accurately hitting its process node targets doesn’t necessarily say anything about the underlying chip — Broadwell might still use more power than Intel projected, for example, or it might not reach target frequencies. It might hit all these metrics but have trouble with yields.
At the very least, this data suggests that Intel was playing it straight when it declared its 14nm technology would be a huge step forward and match historic scaling goals. Whether or not Intel can parlay those advantages into improving its cost structure and wafer costs is still a very open question. With 450mm wafers on hold and EUV still uncertain, the higher cost at each additional node could still poison any semiconductor manufacturer’s attempts to push to lower process technologies — it’s just not clear when that will happen.
Here’s what I suspect it means, strictly speaking for myself: Broadwell may well push down into power envelopes that compete with “little core” products, but the user experience will depend heavily on the design choices the OEM makes. An improperly cooled Broadwell may indeed feel like an Atom. A well-cooled design should be quite a bit stronger. Ultimately, however, Broadwell doesn’t break the laws of physics — and the laws of physics dictate rather strongly that there’s a heat cost for every bit of computation you perform. At a certain point, Broadwell’s “big core” scale-down and Atom’s “little core” scale-up are going to meet and match each other.

Nokia N1: An iPad Mini clone that runs Android 5.0, priced at just $250

Nokia N1 tablet

What looks like Apple’s iPad Mini, but has better specs, is considerably cheaper, and runs a stock version of Android 5.0 Lollipop? The new Nokia N1 tablet, apparently. At just $250 with 32GB of storage — as opposed to the iPad Mini 3’s base price of $400 for the 16GB model — the Nokia N1 is definitely priced to sell.
Just yesterday, Nokia — as in the networking equipment company left standing after Microsoft acquired its devices division — announced that it would be licensing the Nokia name to device makers. Today, it seems Taiwanese electronics giant Foxconn is the first company to take up that offer, with the Nokia N1 Android tablet.
The Nokia N1 bears a striking resemblance to the iPad Mini. It has the same 7.9-inch 2048×1536 screen, the same bezels, the same anodized aluminium unibody chassis, and very similar camera, button, and headphone jack placement. Even the bottom of the N1 looks like an iPad Mini, with two speaker grilles flanking a small, central port. (Incidentally, that port on the bottom of the N1 is one of the first reversible USB Type-C connectors, not Apple’s Lightning connector.)
There’s also no home button, nor any chamfered edges — but curiously, the N1 is slightly lighter (318 grams vs. 331 grams) and thinner (6.9 mm vs. 7.5 mm) than the iPad Mini 3. In terms of raw hardware specs, the Nokia N1 and Apple’s iPad Mini 3 are fairly similar. The N1 is powered by a quad-core Intel Atom Z3580 SoC (Moorefield), which should compare favorably with the iPad Mini’s A7 SoC, or Qualcomm’s Snapdragon 805. The N1 has an 8-megapixel camera on the back, vs. 5MP for the iPad Mini 3 — and its WiFi goes up to 802.11ac, rather than the Mini’s dated 802.11n. The iPad Mini 3 does have a larger battery than the N1, however. We have no idea how these spec differences will play out in practice, of course, but on paper at least the Nokia N1 is pretty hot — especially priced at $250, some $150 cheaper than the iPad Mini 3.
Nokia N1, in hand

Nokia N1 innards
Other than price, the main difference is that the Nokia/Foxconn N1 runs a stock version of Android 5.0 Lollipop — stock, that is, except for the inclusion of Nokia’s newfangled Z Launcher app. Z Launcher, according to Nokia, is a very simple app launcher that “adapts to you.” It surfaces the apps you’re most likely to open — and for other apps, you can “scribble” the app’s first letter on the screen (“u” would bring up Uber, “i” would bring up Instagram, and so on). The launcher is available on Google Play today for Android smartphones, but on tablets it’s exclusive to the Nokia N1 (it’s meant to be a sweetener).
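The scribble interaction boils down to a filter-and-rank step. Here’s a toy version of the idea; the app names and launch counts are made up, and this is in no way Nokia’s actual Z Launcher code:

```python
# A toy version of the "scribble a letter" interaction: filter installed
# apps by first letter, then rank by how often they're launched.
# Illustrative only; not Nokia's Z Launcher implementation.
apps = {"Uber": 12, "Instagram": 40, "Inbox": 7}  # name -> launch count

def scribble(letter):
    matches = [name for name in apps
               if name.lower().startswith(letter.lower())]
    return sorted(matches, key=lambda name: -apps[name])

print(scribble("i"))  # ['Instagram', 'Inbox'] -- most-used first
print(scribble("u"))  # ['Uber']
```

The "adapts to you" part is presumably just the ranking signal: the more you launch an app, the higher it floats for its letter.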
Read: Why Android 5.0 Lollipop is already coming to phones and tablets
The Android tablet space is an interesting one. On the one hand, the Galaxy Note line of tablets and phablets has been well received — and of course, Amazon’s cheap-and-cheerful Kindle Fire tablets appear to sell quite well. On the other hand, though, Android tablets clearly aren’t a hit in the same way as the iPad and iPad Mini. The Android tablet market has always given off a slight air of immaturity, probably because there has never been a stand-out device that has inspired users and developers to take the segment seriously. Maybe the N1, with stock Android 5.0, is that tablet?
The Nokia N1 vs. iPad Mini 3. I removed the branding, just so you can see how similar they are.
Nokia N1 tablet: Even the product photography is a homage to Apple
The Nokia N1 will go on sale in China in or around February 2015, priced at around $250 (before taxes). It will then roll out to some countries in Europe and to Russia. No word on whether it’s coming to the US — but given that the Nokia brand is much more recognizable and valuable on that side of the Atlantic, I wouldn’t be surprised if the US has to wait a long time.

2015 Toyota Camry review: The best Camry ever facing the best competition ever


The 2015 refresh of the Toyota Camry adds more tech and makes America’s best-selling car one of the very best cars, too. Adaptive cruise control, blind spot detection and lane departure warning are all available. The tech doesn’t stop there: the Camry is one of the few cars offering Qi wireless charging. The base four-cylinder Camry gets 28 mpg and the hybrids hit 40 mpg.
Toyota’s challenge is that a half-dozen other midsize sedans are also very good, each with its own attributes and each with its own quirks. Against that all-round competence, the Camry counters with a navigation system lacking a quick-access map button on the center stack, a USB jack that can’t charge tablets, and an excellent JBL audio upgrade that can’t be had on Camrys costing less than $30,000. In other words, the new Camry has annoyances, but not necessarily deal-breakers.

New in 2012, major refresh in 2015

The 2015 Toyota Camry is a major refresh of the seventh-generation 2012 model, enhanced to stay competitive through the last two years of the Camry’s typical five-year model run. For 2015, Toyota added a second sporty trim line, improved interior quality and noise insulation, and reworked the exterior (thus the “bold new Camry” tagline). A year ago, Toyota engineers rushed in structural changes after the 2012-14 Camry fared poorly on offset-crash tests.
The 2015 Camry offers the holy trinity of driver assist technologies: adaptive cruise control, blind spot detection, and lane departure warning. In addition, there’s pre-collision warning, rear cross traffic alert, and OnStar-like onboard telematics. For that, you’ll need the higher trim lines. All Camrys get USB, Bluetooth, a rear camera, and a center stack display ranging from four to seven inches.
Toyota Camry 2015

Easy phone charging: Just drop it on the console

With the Qi (pronounced “chee”) wireless charging system, the Camry is ahead of all but a few of the 250 car models sold here. With this $75 option, a transducer is embedded in the base of the center stack. Stick your Qi-compatible phone there, and it charges any device drawing up to 5 watts, meaning virtually all smartphones.
Wireless charging will be a nice step forward once it reaches critical mass. As the Wireless Power Consortium boasts, “Right now there are 688 Qi-certified products.” But that does not include the Apple iPhone, except by adding a charging cover that makes the phone bulky, similar to adding an extended-battery pack. If wireless charging takes off, Apple will have to join the party fashionably late, just as it was among the last with screens of more than 5 inches.
You may be annoyed when you find out the Camry’s USB jack cannot charge a tablet or other device drawing more than 5 watts (photo above right). You may be even more annoyed to learn the 2015 refresh of the $15,000 Toyota Yaris does charge tablets. Go figure.
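Some rough arithmetic puts that 5-watt ceiling in perspective. Assume a typical ~10 Wh smartphone battery (my assumption; figures are illustrative, not Toyota’s):

```python
# Rough charge-time arithmetic for the 5-watt ceiling.
# ASSUMPTION: a typical ~10 Wh smartphone battery (2.7 Ah at 3.7 V).
battery_wh = 2.7 * 3.7          # about 10 Wh

hours_at_5w = battery_wh / 5    # phone on the Qi pad or the USB jack
hours_at_10w = battery_wh / 10  # a tablet-class 10 W charger

print(round(hours_at_5w, 1))    # ~2 hours, ignoring charging losses
print(round(hours_at_10w, 1))   # ~1 hour; tablets want more than 5 W
```

A phone tops up in a couple of hours at 5 watts, but a tablet battery is several times larger and expects a charger that the Camry’s jack simply can’t match.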
Toyota Camry 2015

On the road: Quieter ride, better handling

Camry is among the best and best-selling midsize sedans, meaning five-passenger cars of about 190 inches (4,800 mm) in length. It has always been the one that provides a smooth ride for drivers who want to get from A to B without drama. You can still do that. Now all 2015 Camrys are a bit tauter in handling and ride, and at the same time the cabin is markedly quieter. For those who want sporty, Toyota added an upmarket XSE (sport) trim line above the continued SE sport trim line. If that interests you, make sure to drive it along with your partner or spouse to confirm the ride isn’t too firm.
Camry comes in four- and six-cylinder gasoline models with a six-speed automatic transmission, along with a four-cylinder Camry Hybrid using a continuously variable transmission. All are front-drive. When the seventh-generation Camry arrived in 2012, its fuel economy was pretty good compared to the competition. Now the 178 hp four is mid-pack at 25 mpg city, 35 mpg highway, 28 mpg combined. It’s fine on highways, and passable getting up to speed on the on-ramp if you’re firm on the gas. The 268 hp V6 doesn’t cost you much in economy at 21/31/25.
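Those combined figures aren’t a simple average: EPA weights a harmonic mean 55% city / 45% highway. The agency computes the label from unrounded test values, so redoing the math with the rounded label numbers lands close but not exactly on the published figure:

```python
# EPA combined fuel economy: harmonic mean, weighted 55% city / 45% highway.
def combined_mpg(city, highway):
    return 1 / (0.55 / city + 0.45 / highway)

# Using the rounded label values for the 2015 Camry:
print(combined_mpg(25, 35))  # ~28.7, near the four-cylinder's 28 mpg label
print(combined_mpg(21, 31))  # ~24.6, near the V6's 25 mpg label
```

The harmonic mean is the right tool because fuel use per mile, not miles per gallon, is what actually averages across a trip.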
The three hybrid trim lines are fuel economy stars at 40 mpg city, 38 mpg highway, 40 mpg combined. The entry Hybrid LE trim line does 1-3 mpg better than that.