Thursday, April 28, 2016

Weekly Mobile Launches!



In the past few days, some exciting phones were launched, boasting powerful hardware and striking designs. Although there are mobile device launches throughout the year, there are really three main landmarks where the bulk of the important stuff is clustered.

 HTC 10



The HTC 10 smartphone was launched in April 2016. The phone comes with a 5.20-inch touchscreen display with a resolution of 1440x2560 pixels at 564 pixels per inch (PPI). The HTC 10 is powered by a 1.6GHz quad-core Qualcomm Snapdragon 820 processor and comes with 4GB of RAM. The phone packs 32GB of internal storage that can be expanded up to 2000GB via a microSD card. As far as cameras are concerned, the HTC 10 packs a 12-UltraPixel primary camera on the rear and a 5-megapixel front shooter for selfies. The HTC 10 runs Android 6.0 and is powered by a 3000mAh non-removable battery. It measures 145.90 x 71.90 x 9.00mm (height x width x thickness) and weighs 161.00 grams. The HTC 10 is a single-SIM (GSM) smartphone that accepts a Nano-SIM. Connectivity options include Wi-Fi, GPS, Bluetooth, NFC, and 4G (with support for Band 40, used by some LTE networks in India). Sensors on the phone include a proximity sensor, ambient light sensor, accelerometer, and gyroscope.
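As a quick sanity check, the quoted pixel densities follow directly from resolution and diagonal size. This is a minimal sketch of the standard PPI formula, not anything from HTC's or LeEco's spec sheets:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal length in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# HTC 10: 1440x2560 panel, 5.2-inch diagonal
print(round(ppi(1440, 2560, 5.2)))  # ~565; spec sheets round this to 564

# Le Max 2: same resolution on a 5.7-inch diagonal
print(round(ppi(1440, 2560, 5.7)))  # 515, matching the figure quoted below
```

The same formula explains why the Le Max 2, with an identical resolution on a larger panel, lands at a lower density.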

Le 2 Pro



There's a version of the Le 2 Pro powered by the same Helio X20 processor as the Le 2, but also one powered by the deca-core Helio X25 SoC. Both are paired with 4GB of RAM and have 32GB of storage. The Le 2 Pro has a 21-megapixel rear camera and a front-facing 8-megapixel unit. Both the Le 2 and the Le 2 Pro have full-HD screens, 3,000mAh batteries with fast charging, and Wi-Fi 802.11ac, 4G LTE, and VoLTE for connectivity. As the name suggests, the Le Max 2 has more to boast of than the other two. There's a 5.7-inch QHD display with a resolution of 1440x2560, and a Snapdragon 820 processor. Three versions were announced: one with 4GB of RAM and 32GB of storage; one with 4GB of RAM and 64GB of storage; and one with a whopping 6GB of RAM and 64GB of storage. The battery is bigger than those of the Le 2 and Le 2 Pro, albeit by a mere 100mAh.


Le Max 2




The Le Max 2 smartphone was launched in April 2016. The phone comes with a 5.70-inch touchscreen display with a resolution of 1440x2560 pixels at 515 pixels per inch (PPI). The Le Max 2 is powered by a quad-core Qualcomm Snapdragon 820 processor and comes with 4GB of RAM. The phone packs 32GB of internal storage that cannot be expanded. As far as cameras are concerned, the Le Max 2 packs a 21-megapixel primary camera on the rear and an 8-megapixel front shooter for selfies. The Le Max 2 runs Android 6.0 and is powered by a 3100mAh non-removable battery. The Le Max 2 is a single-SIM (GSM) smartphone that accepts a Regular SIM. Connectivity options include Wi-Fi, GPS, Bluetooth, and 4G. Sensors on the phone include a proximity sensor, ambient light sensor, and accelerometer.
  

Lenovo Zuk Z2 Pro




Lenovo's Zuk brand has launched its Zuk Z2 Pro smartphone in China, priced at CNY 2,699 (roughly Rs. 27,600) for the version with 6GB of LPDDR4 RAM and 128GB of inbuilt storage. That variant will be available to order starting Friday from the company's China website. The version with 4GB of LPDDR4 RAM and 64GB of inbuilt storage will be available to order starting May 10; the company has not yet revealed its price. The Zuk Z2 Pro runs Android 6.0 Marshmallow out of the box with the ZUI 2.0 skin on top. It features a 5.2-inch full-HD (1080x1920 pixels) Super AMOLED 2.5D curved-glass display and is powered by the flagship quad-core Qualcomm Snapdragon 820 processor clocked at 2.15GHz, with an Adreno 530 GPU.




Author : Saatvik Awasthi



Monday, April 25, 2016

[NEWS] Intel slashes 12,000 jobs in restructuring push


Intel just announced it was slashing its workforce by 11 percent, calling it in all its corporate press-speak glory “a restructuring to accelerate its transformation.”

What it means in real human terms is that 12,000 people will be losing their jobs worldwide, and that is never a good thing to hear.

“These are not changes we take lightly,”
Intel CEO Brian Krzanich said in a call with analysts and reporters.

“Acting now enables us to increase our investments in areas that are critical for our future success. This is a comprehensive initiative. We will emerge as a more collaborative, productive company with broader reach,” he said.
The fact is that Intel is a desktop computer chip maker at a time when desktop computers aren’t doing very well, and as PC sales dropped, something had to give.

Intel is trying to transform, as it so eloquently put it in its announcement, into a company that does more than provide chips for computers. In a time when people are buying more mobile devices and tablets, it’s not an area in which Intel has excelled.

Clearly Krzanich recognizes this, even if the word choice left something to be desired.
 “We’ve talked about this transformation where we are moving from a client-centric [model] to a company that focuses more and more on a broader set of products — the cloud and all the connected devices that connect to that cloud and the connectivity that brings those devices to the cloud. That includes the PC but it’s much more than that,” he said.


“We’ve made enough progress now,” he said,
and added that this move now allows it to
“push the company all the way to this transformation.”
To a large extent he’s talking about the shift in emphasis to the cloud and Internet of Things, those connected devices that will be generating increasing amounts of data in the coming decade. Intel is hoping to get a piece of that cloud and IoT action.

To that end, Intel announced a chip family geared for cloud workloads at the end of last month. The Intel Xeon processor E5-2600 v4 product set is designed to optimize software-defined clouds, according to the company.

Meanwhile, as 12,000 people see their jobs disappear, The Wall Street Journal reported in February that the company paid a whopping $25 million — including an $8.1 million signing bonus and restricted stock valued at an additional $8.1 million — to recruit former Qualcomm executive Venkata ‘Murthy’ Renduchintala to take over the ailing chip division.

Perhaps it’s not a coincidence that Intel Capital, the investing arm of Intel, announced a shift in investing strategy in February, looking to invest in companies that Intel Capital president Wendell Brooks stated “complement what we do.”

In a statement Intel outlined the financial particulars of the decision:

Intel expects the program to deliver $750 million in savings this year and annual run rate savings of $1.4 billion by mid-2017. The company will record a one-time charge of approximately $1.2 billion in the second quarter.
 The majority of affected employees will be informed within 60 days with all the job reductions completed by mid-2017, according to the company.



Author : Anushk Keshri Rastogi

Source : click here

Saturday, April 23, 2016

[RUMOUR] Nvidia may have killed Maxwell production ahead of June Pascal launch


When Nvidia demoed Pascal at GDC 2016 last month, many readers were a bit unhappy that the company didn’t give more details on what its upcoming consumer cards would look like. The latest rumors are that Nvidia may have stopped production on its GM200 and GM204 products like the GTX 980 Ti, 980, and 970 in order to quickly replace those products with a GP104 derivative.

GP104 is the name for Nvidia’s next-generation consumer-level Pascal card, and it’ll debut long before the HPC-oriented GP100 does. The full scientific version of Pascal isn’t scheduled to launch before the tail end of this year, whereas we expect to see these new consumer cards launching as early as June. This would put AMD and Nvidia on similar time frames (AMD has yet to reveal its launch dates, but we expect cards in June or July).

What can gamers expect?

Nvidia released the full whitepaper on Pascal GP100 last week, and while we can’t draw many conclusions at this juncture, there are a few things we can safely assume will be common to the two architectures. First, Pascal supports improved compute preemption compared with Maxwell. From Nvidia’s whitepaper:


"Compute Preemption is another important new hardware and software feature added to GP100 that allows compute tasks to be preempted at instruction-level granularity, rather than thread block granularity as in prior Maxwell and Kepler GPU architectures. Compute Preemption prevents long-running applications from either monopolizing the system (preventing other applications from running) or timing out."


This suggests that Pascal still won’t support asynchronous compute workloads in the same fashion AMD’s GCN hardware does, though the jury is still out on whether this capability will play a significant role in shaping performance in a majority of DirectX 12 titles. It also suggests Pascal will see a smaller performance penalty with async compute enabled than Maxwell did, since it can interleave compute workloads more effectively than its predecessor.




The image above shows the structure of a Pascal GP100 SM unit. Pascal is heavily focused on double-precision compute, which means the GP104 variant of the chip will probably remove these units altogether, then use the space savings to pack in more single-precision FP32 cores. Expect limited double precision support, just as we saw with GM204 and GM200.

One significant difference would be the way Nvidia hits its targets. Kepler and Maxwell both relied on two different dies — the highest end cards were anchored by GK110 / GM200, while the high-end segment was anchored by GK104 / GM204. These rumors suggest that Nvidia will replace the GTX 980 Ti with the full iteration of GP104, while the future GTX *80 and *70 cards will be further cut down from this base model.


As for the bigger picture, I’d expect AMD and Nvidia to both launch GPUs with 8GB of RAM and possibly some 4GB cards depending on price brackets and targets from the two companies. Polaris and Pascal are likely to drop within weeks of each other, so I wouldn’t run out and buy a card from one company before we’ve had a chance to put them head to head.

Author : Anushk Keshri Rastogi

Thursday, April 21, 2016

Intel demos 3D XPoint, showcases Optane’s 2GB/s performance


Last year, Intel and Micron announced that they’d developed a new memory standard. This new memory, 3D XPoint (pronounced “crosspoint”), is a non-volatile memory that Intel is advertising as the first major memory breakthrough in 25 years. Early speculation was that 3D XPoint would be based on, or at least related to, phase-change memory, but Intel has denied that this is the case without specifying exactly how Optane actually works. Intel has been claiming that 3D XPoint (marketed as Optane) would deliver up to 1000x the performance of NAND flash, and the company actually demoed the new technology live at Shenzhen IDF this week.

The demo video is courtesy of PC Perspective and can be seen below.




The video shows an Intel Optane drive pounding the heck out of an SSD with both drives connected via Thunderbolt. But there are some serious disparities between the two configurations. The NAND drive is copying from a SATA SSD to an external SSD connected via Thunderbolt, but the sustained copy rate of 283MB/s suggests a serious bottleneck in the system somewhere; SSD-to-SSD copy rates should be higher than that. The Optane-to-Optane copy used a PCI Express-based drive as the internal storage, which means it ought to have been compared against an SSD configured in the same fashion. As PC Perspective points out, Intel’s own top-end products can beat Optane in a head-to-head NAND-vs-whatever-the-heck-Optane-is shootout.

Early days for Optane


The short explanation for this sleight-of-hand is that Intel is still ramping up Optane (mass production is set for the end of 2016) and the hardware’s performance is likely still in early days. Furthermore, there are advantages to a non-volatile memory pool that aren’t just performance related — if Intel can build a storage array that’s far more durable than NAND with better access latencies, then it may not matter if maximum NAND performance is able to keep up with Optane. Most of the innovation happening in NAND these days is aimed at increasing its density rather than pushing performance.



It’s possible that Optane will kick off at a point similar to high-end NAND, but scale more effectively in the long term or by offering yet another storage tier between high-speed, low-density RAM and traditional hard drives.



There’s no word on whether or not Optane will come to consumer products or how useful it would be if it did. The performance gap between SSDs and HDDs is still larger than the gap between even an older SSD and a modern high-end drive. Optane isn’t expected to match DRAM performance, and that’s more or less what it would need to do to give users a performance boost to match what SSDs offered over traditional hard drives.

Author : Anushk Keshri Rastogi

Source : click here


Wednesday, April 20, 2016

[LEAK] Nvidia Pascal GeForce GTX 1080 Pictures Leaked


A photograph showing Nvidia’s upcoming flagship Pascal GeForce GTX 1080 graphics card has just made its way to the web. It depicts a silver graphics card shroud, clearly bearing the Nvidia logo on the right and an engraved “GTX 1080” on the left. The allegedly new NVTTM cooler maintains the familiar aesthetic of Nvidia’s current GeForce lineup, featuring a silver metallic body with black accents and a small acrylic window sitting above the vapor chamber and heatsink fin array.




The new design adopts a much more aggressive look, with many sharp angles throughout the exterior of the metallic exoskeleton. Notably, though, only the cooling unit’s shroud is visible in this picture. It’s difficult to tell whether it includes the actual heatsink, as no fins are visible behind the acrylic window. Additionally, we can clearly see that there’s no printed circuit board beneath the shroud, no PCIe connector, and no rear bracket. So this is just the external body allegedly belonging to a GTX 1080, not an actual graphics card.
An earlier photo leaked online showed another, identical metallic shroud with the “GTX 1070” moniker engraved instead. We could very well be looking at the creation of a modder who designed these shrouds for his own personal amusement. There’s no way of telling whether these are actually Nvidia-made or the product of a creative enthusiast.



Whether Nvidia will actually name its next-generation GTX 980 and GTX 970 replacements the GTX 1080 and GTX 1070 has been a subject of fierce debate. It’s important to point out that there’s very little evidence Nvidia will even use a 1000-series naming convention for its upcoming products, rather than introduce a fresh one as it has historically done whenever a given naming scheme ran out.

Author : Anushk Keshri Rastogi

Source: click here


Wednesday, April 13, 2016

[REVEALED] ASRock and others confirm Intel's new Broadwell-E processor family!



While Intel confirmed their flagship Core i7-6950X processor on their own webpage last week, ASRock has confirmed the rest of the processors which will be part of the Broadwell-E family. Based on the 14nm node developed for the Broadwell architecture, the Broadwell-E lineup will make its way to the market in Q2 ’16 (Computex) along with new entries in the X99 boards from AIBs like ASUS, MSI, Gigabyte and others.


“The most unmissable part of Intel Broadwell-E is the flagship Core i7-6950X, which will be the first deca-core processor for the commercial market,” 
ASRock said in a press release on its website. And yeah, there’s more. ASRock went on to confirm the rest of the line up too.


“While this new CPU boasts a compelling 10-cores-and-20-threads architecture, users require a BIOS update for their motherboards to handle it; this update applies to the rest of the Broadwell-E gang, including i7-6900K, i7-6850K, and i7-6800K as well,” 
the press release says. ASRock didn’t spell out the specs of the others but they’re expected to be: eight-core, six-core, and six-core, respectively.

More Leaks on the Subject 




Besides Intel’s own accidentally (on purpose?) slip, which confirmed that the Core i7-6950X would hit speeds of up to 3.5GHz and have 25MB of cache, MSI “leaked” news, too.

Earlier this month, MSI said its X99 motherboards were ready for Broadwell-E. MSI’s press release, however, was far more coy and used screenshots and performance numbers from a Xeon chip instead. Gigabyte also quietly added “Support 2016 Q2 coming new CPU” in a BIOS update pushed out in January.


So obviously, this has been the worst-kept secret. The only real unknown is how much Intel will charge for the CPU. When the chip first popped up on the leak radar, many people assumed the price would be $1,000.

Intel has basically charged a grand for its top-end processor since the days of the first quad-core “Bloomfield” Core i7-965 Extreme Edition. That price held when Intel added two more cores to the Core i7-990X. Several generations later, when Intel “gave” consumers two more cores still, for a total of eight in the Core i7-5960X, the price remained $1,000.

With the 10-core Core i7-6950X though, there are indications Intel may ramp up the price to $1,500. Again, Intel has never confirmed nor talked about the CPU on the record, but rumors of the higher price have been hot and heavy since January.

That has consumers balking. But Intel may have good reason for the increase. Intel’s top-end Core i7 chips have always just been repurposed Xeon chips with a few features turned off. Intel makes serious bank off of Xeons and doesn’t want to cannibalize those sales. If the 10-core Xeon is coming in at a higher price, that could funnel down to the i7-6950X.

The Complete Intel Broadwell-E Family

The complete list of enthusiast-level processors in Intel's new Broadwell line-up, i.e. the Broadwell-E family, comprises the new Intel Core i7-6950X, i7-6900K, i7-6850K, and i7-6800K, with their specifications being:



Sources: source 1 | source 2



Wednesday, April 6, 2016

[REVEALED] Nvidia's Monstrous Pascal Graphics Card, the Tesla P100




The first full-fat GPU based on Nvidia's all-new Pascal architecture is here. And while the Tesla P100 is aimed at professionals and deep learning systems rather than consumers, if consumer Pascal GPUs are anything like it—and there's a very good chance they will be—gamers and enthusiasts alike are going to see a monumental boost in performance.

The Tesla P100 is the first full-size Nvidia GPU based on the TSMC 16nm FinFET manufacturing process—like AMD, Nvidia has been stuck using an older 28nm process since 2012—and the first to feature the second generation of High Bandwidth Memory (HBM2). Samsung began mass production of faster and higher capacity HBM2 memory back in January. While recent rumours suggested that both Nvidia and AMD wouldn't use HBM2 this year due to it being prohibitively expensive—indeed, AMD's recent roadmap suggests that its new Polaris GPUs won't use HBM2—Nvidia has at least taken the leap with its professional line of GPUs.


The result of the P100's more efficient manufacturing process, architecture upgrades, and HBM2 is a big boost in performance over Nvidia's current performance champs like the Maxwell-based Tesla M40 and the Titan X/Quadro M6000. Nvidia says the P100 reaches 21.2 teraflops of half-precision (FP16) floating point performance, 10.6 teraflops of single precision (FP32), and 5.3 teraflops (1/2 rate) of double precision. By comparison, the Titan X and Tesla M40 offer just 7 teraflops of single precision floating point performance.
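Those headline teraflop figures fall out of core count, clock speed, and two FLOPs per core per cycle (one fused multiply-add). A rough sketch; the 64 FP32 cores per SM and the ~1.48GHz boost clock are assumptions drawn from the GP100 whitepaper, not stated in this article:

```python
def tflops(cores: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    """Peak throughput in teraflops: cores x clock x 2 FLOPs/cycle (one FMA)."""
    return cores * clock_ghz * flops_per_cycle / 1000

# 56 enabled SMs x 64 FP32 cores each, assumed ~1.48GHz boost clock
fp32 = tflops(56 * 64, 1.48)
print(round(fp32, 1))      # ~10.6 TFLOPS single precision
print(round(fp32 * 2, 1))  # FP16 runs at 2x rate -> ~21.2
print(round(fp32 / 2, 1))  # FP64 runs at 1/2 rate -> ~5.3
```

The neat 2:1:0.5 ratio between FP16, FP32, and FP64 is exactly what the quoted 21.2 / 10.6 / 5.3 figures show.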



Memory bandwidth more than doubles over the Titan X to 720GB/s thanks to the wider 4096-bit memory bus, while capacity goes up to 16GB. Interestingly, the Tesla P100 isn't even a fully-enabled version of Pascal; it's based on the company's new GP100 GPU, with 56 of its 60 streaming multiprocessors (SM) enabled. The GP100 die, with a surface area of 610 square millimetres, is roughly the same size as the GM200 Titan X. Rather than shrink down the die thanks to the smaller 16nm process, Nvidia has instead chosen to simply fill the same space up with a lot more transistors—15.3 billion of them to be precise—almost doubling that of the top-end GM200 Maxwell chip.










The P100 also supports NVLink, a proprietary interconnect announced way back in 2014 that allows multiple GPUs to connect directly to each other or supporting CPUs at a much higher bandwidth than currently offered by PCI Express 3.0. It also supports up to eight GPU connections, rather than the four of PCIe and SLI.
  
Huang also teased at the time that systems packing Pascal graphics would wind up being 10 times faster than Maxwell-based systems—but at GTC 2016, as he unveiled the P100, he upped the ante, saying that certain tasks will see a 12-fold increase in speed. A task that completes in 25 hours on a Maxwell-accelerated PC may take just two hours on a Pascal system, he claimed.










Launching alongside the Tesla P100 is Nvidia's DGX-1 Deep Learning System. The DGX-1 features eight Tesla P100 GPUs providing 170 teraflops of half-precision performance from its 28,672 CUDA cores. The DGX-1 also features two 16-core Intel Xeon E5-2698 v3 2.3GHz CPUs, 512GB of DDR4 RAM, 4x 1.92TB SSD RAID, dual 10GbE, 4x InfiniBand EDR network ports, and requires a maximum of 3200W of power. It'll cost a mere $129,000 (~£103,000) when it launches in June. Those after a P100 in other types of server will have to wait a little longer, with cards expected to reach systems in 2017, likely due to binning and HBM2 manufacturing constraints.
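The DGX-1 headline numbers are simply eight P100s added up. A quick sketch using the per-GPU figures quoted earlier in the article (the 3584 cores per P100 is an assumption from the 56-SM layout, not stated here directly):

```python
P100_FP16_TFLOPS = 21.2   # per-GPU half-precision figure from the article
P100_CUDA_CORES = 3584    # assumed: 56 enabled SMs x 64 FP32 cores
GPUS = 8

print(GPUS * P100_CUDA_CORES)          # 28672 CUDA cores, as quoted
print(round(GPUS * P100_FP16_TFLOPS))  # ~170 TFLOPS half precision, as quoted
```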

Those same constraints may mean that consumer graphics cards based on Pascal, such as the rumoured GTX 1080 and 1080 Ti, are more likely to feature good old GDDR5, or perhaps even GDDR5X, a higher bandwidth version of the technology intended to compete with HBM. Even if they do, the innate processing grunt of Pascal will still make a huge difference to performance in 3D applications like video games and virtual reality. The FP32 performance—which is the most important for games—is still roughly 50 percent higher in the Pascal GP100 than the Maxwell GM200 (10.6 versus 7 teraflops).
With Pascal now at least partly outed thanks to the P100, expect Nvidia to drop details on the consumer cards soon—probably before E3 in mid-June.

Author : Saatvik Awasthi

 Source: Click Here

Tuesday, April 5, 2016

[ANNOUNCED] Nvidia announces Quadro M5500 GPU with 2048 CUDA cores!



Together with the VR Ready announcement, Nvidia unveiled the Quadro M5500 graphics card, a mobile GPU geared towards enabling VR-capable performance in mobile workstations.




The M5500 is Nvidia’s most powerful mobile graphics card (tied with the desktop-class GTX 980 found in some notebooks, except that this is a Quadro SKU), coming packed with 2048 CUDA cores. The Maxwell GPU is linked to 8 GB of GDDR5 memory over a 256-bit interface, which enables a 211 GB/s memory bandwidth running at 3.3 GHz (6.6 GHz effective). Performance for single floating point precision sits at 4.67 TFlops.


Nvidia would not provide standard clock speeds, stating that the M5500's clock is decided by the OEM implementing it. Naturally, that would depend on how much thermal headroom there is. Under ideal conditions, on full blast the M5500 can burn through 150 W, which is a huge power figure for any laptop to handle.




Nvidia said that it announced the M5500 to handle the demands of VR applications. In contrast to typical workstation or gaming needs, where Full HD at 30 frames per second is acceptable, in VR, the demands are much higher, with resolutions close to full HD per eye and both needing to run at more than 90 frames per second for a smooth experience.
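To put that demand in numbers: assuming a headset pushing roughly 1080x1200 pixels per eye at 90 fps (typical of the first Rift and Vive panels, not figures from Nvidia's announcement), the raw pixel rate versus 1080p at 30 fps works out like this:

```python
def pixels_per_second(w: int, h: int, fps: int, eyes: int = 1) -> int:
    """Raw pixel throughput the GPU must render per second."""
    return w * h * fps * eyes

desktop = pixels_per_second(1920, 1080, 30)          # full HD at 30 fps
vr = pixels_per_second(1080, 1200, 90, eyes=2)       # assumed per-eye panel, 90 fps
print(vr / desktop)  # 3.75: VR needs nearly 4x the raw pixel rate
```

And that understates the gap, since VR renderers typically draw at a higher internal resolution than the panel to compensate for lens distortion.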





Because you cannot buy the M5500 graphics card alone, pricing hasn’t been announced. Instead, it is up to the mobile workstation makers to decide. Given that this is Nvidia’s most powerful mobile graphics card, and a Quadro SKU to boot, it's going to have a premium price tag.

Author : Anushk Keshri Rastogi

Source: Click Here




Friday, April 1, 2016

[NEWS] The largest online battle in gaming history just concluded!



And yes, you guessed it right! It's yet another EVE Online megabattle, its biggest one yet. 

This time, the coalitions battling for supremacy were the CFC, more than 40,000 players strong, and a very newly formed coalition being called 'LSV.' The damage has to be in the hundreds of thousands of real-world dollars, as it tends to be in huge in-universe wars like this in EVE. Whew! 


Click here for the report on the war explaining everything on the EVE subreddit and here for an amazing fanmade trailer for what is being termed 'the Easter War.' 


Here's a quote from the report linked above!





The political structure of Eve before the war was CFC, a super-coalition of 40,000 members+ having total dominance in the north of the map, in an area called null sec (or 0.0 space, it's lawless and can be player owned). The hallmark of CFC is enormous numbers of people in generally cheap doctrines (doctrine being a set of ships and tactics) to outnumber an enemy. They were considered to be totally unassailable, possessing manpower and resources far beyond even the most powerful of entities in Eve.
Low Sec (0.1 - 0.4 space) is another area of space, and has some laws (not many though). The LowSec entities (known collectively as LSV) are constantly fighting over "moons" (a way of passively generating income for a player group), and their hallmark is obscenely expensive and skill intensive doctrines, to make up for comparatively very small numbers of players.
CFC, the big group up north, have been stagnating because no one wants to fight them (they're known for making fights not fun, by intentionally lagging servers, avoiding fights and when they do fight, bringing so many people they can't possibly lose). To counter-act this, they declared war on LSV to take their moons (the passive income thingys) and force them to fight.
This didn't work. Instead of steamrolling the LSV groups with minimal preparation and effort, they got crushed in pretty much every engagement. By this I mean they'd lose full fleets and kill only one or two ships in return. Gradually they got a little better, but they almost never did "well," almost always losing, and continued to be demolished by fleets that at times were a quarter their size or less.
To counter-act this, they prepared better and got more numbers. In response, the LSV entities put aside their constant squabbling and war mongering to band together into what is affectionately known as "Forming Voltron." (thus the name, Low Sec Voltron – LSV). LowSec Alliances might constantly fight and war with their rivals, but they all hate one thing above all others, and that’s outsiders. The same thing happened again, with CFC losing fights, but on a much larger scale with fights involving thousands of pilots.
After not only defending all their own moons, the LowSec entities proceeded to wipe CFC out of LowSec, taking all their valuable moons in the process. While this was happening, one of the larger Alliances in the CFC (who are a coalition of alliances) pissed off a group called I Want Isk (IWI), an enormously rich and powerful gambling organisation. Something about theft and betrayal, but regardless, they decided to pay these low sec groups to get revenge against the CFC for them (and it is likely a major catalyst in them forming together so quickly).
Having successfully expelled CFC from Low Sec, LSV looked for future targets, and with likely direction from the IWI (gambler guys) and Tishu's BLOPs (battleships with a very long range jump drive to attack farming ships) campaign in Fade, set their sights on the north. With the assistance of virtually every major entity in Eve, who answered the call to arms from either being paid by IWI or the glory of the next major war, the new Coalition (who have yet to decide on an official name, although Money Badger Coalition (MBC) seems to be a front-runner) have begun an invasion.
Spread across numerous regions and hundreds of systems, MBC have begun to systematically drive out CFC from their homes. Currently most of the alliances in the CFC are in full retreat, after having lost several regions that were previously thought to be impregnable. As it currently stands, a large portion of the CFC have been ordered to withdraw to the far north, the home of Goonswarm, the leaders and core of the CFC. A recent address by the leader of Goonswarm indicates they intend to use the north as a base to harass the allies as they grind the regions in order to control them totally. As the allies begin to grind out the regions which are increasingly being left undefended, the last few pockets of resistance such as the Co2 Alliance are gradually being worn down.
It is assumed that at some point the allies will move further north, once their latest conquests are secure, to take the fight to Goons. If this happens, you can be almost certain that we will see another battle such as that of B-R5RB several years ago (you can look that up, CFC won that one), which resulted in hundreds of thousands of dollars’ worth of assets being lost.
In other words, it’s the war of a century in Eve, with pretty much the entirety of the PvP groups in the game all allied against a single super-coalition. Regardless of who wins, it's going to be a really cool time to be in the game.

~TL;DR~The largest coalition in the game decided to take a poke at the numerically inferior Low Sec alliances. Instead of crumbling as expected to the superpower, they banded together and pushed them back out of their area of space, taking all of the big coalition's income in the area as they did.
Once people saw it was possible to beat this super-coalition, most of the player groups in the game decided to band together, with encouragement from the enormously rich I Want Isk (IWI) gambling organisation who have grievances with the super-coalitions's component alliances.
Today marked a major victory in taking the strategically important staging system of one of the super-coalition's player groups which caused that group to flip sides to the attackers.
~Very TL;DR~Big War.
Big group attack little group.
Little group win.
Little group attack big group.
Everyone attack big group now.
Big group losing. Badly.


Author : Ishaan